General Introduction to Essential Knowledge,  Pearson Longman Press, 2004, by Steven Luper


The subject of this book is epistemology. Epistemology is the theory of knowledge, the study of the nature, sources, and limitations of knowledge and justification. In studying the nature of knowledge and justification, theorists typically try to delineate the conditions that must be met for a given person to know, or justifiably believe, that a given proposition is true. That is, they offer analyses of knowledge and justification. In this introduction, we will briefly describe the task of analysis, and review some of the ways people have understood epistemic concepts. We will also outline some of the difficulties theorists have confronted while working out what may be known.

The Analysis of Knowledge

Sometimes when people speak of knowledge, they mean to refer to various skills or abilities, such as are displayed when we know how to perform some task (Ryle, 1949). For example, I know how to ride a bicycle. This type of knowledge we might refer to as ability knowledge. A different sort of knowledge is involved in knowing that something is the case. For example, I know that snow is white. This we might call propositional knowledge. Epistemologists are interested primarily in propositional knowledge as opposed to ability knowledge.

As usually understood, a proposition is an abstract object; it is that which a declarative sentence expresses. For example, the words ‘Snow is white’ express the proposition that snow is white, and the same proposition is expressed by ‘Schnee ist weiss.’ Propositions purport to describe the world, and true propositions do so accurately. Moreover, when you and I believe that something is the case, we stand in the relationship of belief to a proposition.

Epistemologists, like mathematicians, often use symbols to make their ideas clear. Most of us recall letting the letter ‘c’ stand for the length of the longest side of a right triangle, and ‘a’ and ‘b’ for the lengths of the other two sides, so that we can express the formula a² + b² = c², which holds for any right triangle, regardless of its size. Epistemologists do something similar when they use the letter ‘p’ (or ‘P’) to stand for an arbitrary proposition, and the symbol ‘S’ to stand for a person or subject. When we want to distinguish among propositions, we can use ‘p’ for one and ‘q’ or ‘r’ and so forth for the others. Accordingly,

S believes p

means that an arbitrary person S believes that an arbitrary proposition p is true.

Knowledge (and justified belief) can be classified in terms of the types of propositions believed. For example, there is knowledge of necessary truths, such as 2 + 2 = 4, which are propositions that cannot fail to be true, and knowledge of contingent truths, such as Bush is President, which are propositions that, while true, might have been false. Philosophers sometimes express the distinction between necessity and contingency in terms of possible worlds. A necessary truth is a proposition that holds in all possible worlds. A contingent truth is a proposition that holds in only some possible worlds. (What is a possible world? Well, one example is the actual world. Imagine setting out a complete description of the world, a set of statements that lay out, in a completely accurate way, everything that was, is, and will be the case. This complex description says everything there is to say about the actual world. It also characterizes a possible world, in that none of the statements in the description is inconsistent with the others. But the actual world is not the only possible world. We can arrive at other possible worlds by starting with our description of the actual world and altering some of its statements, being careful not to introduce contradictions. For example, there is a possible world that differs from the actual world in that you did not read this book.)
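The possible-worlds gloss on necessity and contingency lends itself to a miniature formal model. The sketch below is purely illustrative (the proposition names are invented for the example): worlds are treated as assignments of truth values to a handful of atomic propositions, a necessary truth holds in every world, and a contingent truth holds in some worlds but not all.

```python
from itertools import product

# Toy model: a "world" is an assignment of truth values to atomic propositions.
atoms = ["snow_is_white", "you_read_this_book"]
worlds = [dict(zip(atoms, values))
          for values in product([True, False], repeat=len(atoms))]

def necessary(prop):
    """A proposition is necessary if it holds in every possible world."""
    return all(prop(w) for w in worlds)

def contingent(prop):
    """A proposition is contingent if it holds in some worlds but not all."""
    return any(prop(w) for w in worlds) and not necessary(prop)

# "Either snow is white or it is not" holds however the facts fall out: necessary.
tautology = lambda w: w["snow_is_white"] or not w["snow_is_white"]
# "You read this book" holds in some worlds and fails in others: contingent.
reading = lambda w: w["you_read_this_book"]

print(necessary(tautology))  # True
print(contingent(reading))   # True
```

Since the number of worlds doubles with each atomic proposition, this brute-force enumeration is feasible only for toy cases; it is meant to fix the concepts, not to model a genuinely complete world-description.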

Knowledge can also be classified in terms of its sources. A posteriori or empirical knowledge is based on experience. The source of a priori or nonempirical knowledge is reason.

When theorists attempt to clarify the conditions under which knowledge and justification exist, they are trying to set forth the essential ingredients in knowledge and justification. They are offering ways of filling in the blanks in the following two schemata:

S knows p if and only if ______ is the case.

S’s belief p is justified if and only if ______ is the case.

Let’s break these schemata down a bit more by clarifying the phrase ‘if and only if’.

Saying something of the form,

p if and only if q

is simply another way of asserting that both

p if q

and

p only if q

are true. By way of illustration, consider the following two claims, which are true:

(a) I am a bachelor if I am an unmarried male.

(b) I am a bachelor only if I am an unmarried male.

Claim (a) means the same thing as

If I am an unmarried male, then I am a bachelor.

Claim (b), on the other hand, means the same thing as

If I am a bachelor, then I am an unmarried male.

So it follows from (a) and (b) together that I am a bachelor if and only if I am an unmarried male.
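The equivalence just established can also be checked mechanically, by running through all four combinations of truth values. The following sketch is only an illustration of the logical point, not something from the text: it verifies that ‘p if and only if q’ agrees with the conjunction of ‘p if q’ and ‘p only if q’ in every case.

```python
from itertools import product

def if_then(a, b):
    """Material conditional: 'if a then b' fails only when a is true and b is false."""
    return (not a) or b

for p, q in product([True, False], repeat=2):
    p_if_q = if_then(q, p)       # "p if q" means: if q, then p
    p_only_if_q = if_then(p, q)  # "p only if q" means: if p, then q
    biconditional = (p == q)     # "p if and only if q"
    assert biconditional == (p_if_q and p_only_if_q)

print("equivalence holds in all four cases")
```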

In discussing analyses of knowledge, it is useful to introduce another bit of terminology: the terms ‘necessary condition’ and ‘sufficient condition’. To say proposition p is a necessary condition for proposition q is to say that if p is false then so is q. For example, being over 5 feet tall is a necessary condition for being 6 feet tall. And to say that proposition p is a sufficient condition for proposition q is to say that if p is true then so is q. An example: being 6 feet tall is a sufficient condition for being over 5 feet tall. When theorists fill in the blank of the schema

S knows p if and only if ______ is the case,

they are trying to supply conditions that are individually necessary and jointly sufficient for S to know p.
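The height example can likewise be checked by brute force. In the illustrative sketch below (the range and predicates are invented for the example), sufficiency means the first condition never holds without the second, and necessity means that whenever the second condition fails, the first fails too.

```python
# Toy model: represent people simply by their height in inches.
heights = range(50, 90)  # a sample of possible heights

def is_six_feet(h):
    return h == 72

def over_five_feet(h):
    return h > 60

# Sufficient: whenever is_six_feet holds, over_five_feet holds as well.
six_feet_suffices = all(over_five_feet(h) for h in heights if is_six_feet(h))

# Necessary: whenever over_five_feet fails, is_six_feet fails as well.
over_five_necessary = all(not is_six_feet(h) for h in heights if not over_five_feet(h))

print(six_feet_suffices)     # True
print(over_five_necessary)   # True
```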

To better familiarize ourselves with the task of analysis, let us consider some proposed conditions for knowledge—some proposals for filling in the above blank.

The Standard Conditions for Knowledge

What conditions must be met for knowledge to exist? Three suggestions come to mind: the truth condition, the belief condition, and the justification condition. Let us discuss each in turn.

The Truth Condition. Almost everyone will grant that if we know a proposition is true, then it is true. That is, the following is a necessary condition for S to know p:

p is true.

Call this the truth condition.

But what is truth? The matter is quite controversial, but there are three main views. First, the correspondence theory (Moore 1953, Russell 1910) says that truth is a relationship between a proposition (or sentence) and the world whereby elements of the former in some way correlate to elements of the latter. Unfortunately, it has proven difficult to state clearly what sort of correspondence constitutes truth. Some theorists try to make do with a principle suggested by Alfred Tarski (1944). Tarski pointed out that however truth is defined, it must be consistent with the following condition: the statement ‘Snow is white’ is true if and only if snow is white; the statement ‘Coal is black’ is true if and only if coal is black; and so on, for all statements. Notice that when ‘Snow is white’ is placed within quotation marks, it refers to a statement; without the quotation marks, it refers to the state of affairs of snow being white. Thus Tarski’s principle suggests to some theorists that what makes statements true is a corresponding state of affairs, but the matter is controversial.

A second view, called the coherence theory of truth (Blanshard 1939), says that statements are made true by cohering with others to form a comprehensive worldview. However, advocates have had a great deal of trouble in clarifying the notion of coherence without drawing on the notion of truth, which would make their account ultimately circular. An analysis or definition is circular when it uses the very term it purports to analyze or define. If, when asked to define the term ‘ixapitl,’ I say it refers to any bird that has the property of ixapitlness, my account is unhelpful because it is circular. If coherentists were to draw on the idea of truth when they clarify ‘truth,’ no one without a prior acquaintance with the idea of truth could understand the account that is supposed to clarify the idea of truth. Coherentists have also been plagued by the charge that there are many equally coherent schemes of claims that are mutually inconsistent. If two or more schemes are mutually inconsistent, they cannot all be true.

A third account, the pragmatic theory, comes in at least two versions. The first, developed by Charles Peirce (1877b), says that "the opinion which is fated to be ultimately agreed to by all who investigate, is what we mean by the truth." A second, defended by William James (1896), says statements are made true when accepting them helps us to accomplish useful goals, such as the aim of making sense of experience. Many charge that such pragmatic criteria for truth are too weak: can’t we imagine false claims that would be accepted by inquirers even in the long run? Aren’t there false claims that would serve all sorts of useful goals?

While the nature of truth is very controversial, there is widespread agreement that truth is necessary for knowledge. The truth condition implies that we can never know a claim is true if in fact it is false. And this seems plausible. If the earth is not flat, then I do not know that it is flat.

If we object to the truth condition, the chances are we are confused about its implications. Consider, for example, the worry that the truth condition makes it impossible to decide whether we know something, since, in order to know it, we must first establish that it is true, which is tantamount to establishing that we know it! To see where this objection goes wrong, we must distinguish between the conditions under which a claim is true, and a practical procedure for verifying that the claim is true. Consider, as an example, the claim that I am an alien—an extraterrestrial life form, not a citizen of a different country—who looks and acts just like a human being. As a condition for my being an alien, we might use this: I am a member of a life form that evolved outside the earth. Meeting this condition seems to be a necessary and a sufficient condition for my being an alien. But it does not supply a practical way for you to verify that I am an alien if I have clever ways of hiding my origins. Similarly, there can be conditions for knowing p that are difficult in practice to verify. In analyzing knowledge, we are not attempting to set out a practical test for knowledge.

Related points are these: first, truth is not a condition that we must meet in order to know that a proposition is true. Rather, it is a condition that the proposition we believe must meet. Second, in a certain way, ‘I know my name is Steve’ is like ‘I killed Fred’: the logic of success applies to the concepts of knowledge and killing. No matter what I do, I have not killed Fred unless he really is dead; similarly, no matter what I do, I do not know that something is true unless I get it right.

The Belief Condition. If we know p, then we believe p. That is, S knows p only if the following condition is met:

S believes p.

The belief condition is widely accepted, but a bit more controversial than the truth condition. Some have argued that knowledge entails states of mind that are very similar to, but not quite the same as belief. Some of the proposed substitutes are states of psychological certainty (Ayer 1956), conviction (Lehrer 1974), and acceptance (Lehrer 1989). And some challenge the belief condition on the grounds that knowing and believing are incompatible (Plato, Duncan-Jones 1938) or at least separable (Radford 1966). Duncan-Jones points out that we say things like, "I don’t believe the trade center towers were destroyed, I know it!", which might suggest that ‘belief’ applies only when an opinion is held on weak grounds, so that a belief state is not compatible with knowledge. But examples like this hardly show that believing excludes knowing. It is still possible that knowing involves believing plus something else, such as good evidence.

So far we have mentioned two conditions, each necessary for knowledge. Let us pause a moment and ask the following question: are the truth and belief conditions jointly sufficient for knowledge? If we say that these two conditions jointly suffice for knowledge, we would be making an assertion of the following sort:

If p is true and S believes p, then S knows p.

Is this correct? No, we can easily see that it is not. Imagine that we are lost in a desert and we decide whether there is water just beyond a hill to the north by tossing a coin, telling ourselves that if the coin comes up heads then there is water beyond the hill, and otherwise there is not. We toss the coin; it comes up heads, so we believe that there is water. By dumb luck, it turns out there is water beyond the hill. But it is clear that we do not know there is water over the hill. Our belief is true, but only by sheer coincidence. For a belief to count as knowledge, it is not enough that it be true.

The Justification Condition. What more is required? The matter is controversial. But a preliminary answer is this: Our belief p must be justified. We must have substantial grounds for believing p. Admittedly, a person who predicts the presence of water on the basis of a coin toss can be said to have ‘grounds’ for believing what they do. However, these grounds are poor indeed, and it takes substantial grounds to position ourselves for knowing things.


Justification is clearly enough an ingredient of knowledge, but the notion of justification is itself the subject of much debate in epistemology. At the heart of the controversy are two questions: What makes something justification? and, What is the structure of justification?

What Makes for Justification?

One way to answer the question, ‘What constitutes justification?’ is to relate it back to the analysis of knowledge. Whatever else justification is, it seems clear that it is that which converts a true belief into knowledge. So what must be added to a true belief if it is to count as knowledge?

A great many people (for example, Feldman and Conee 1985, 2003) think of justification as evidence that a proposition is true. This longstanding view is sometimes called evidentialism. The notion of evidence is sometimes cashed out in terms of entailment, for generally we will say that e is evidence for p when e entails p; that is, when p cannot possibly be false if e holds. But we rarely have this kind of evidence for our beliefs. An alternative is that e is evidence for p when p is the best explanation of e. For example, if the best explanation of facts about the crime scene (shoe prints left outside the broken window, a written will involving lots of money, and so on) is that Jeeves the butler murdered Mrs. Bigbucks, we are justified in believing that Jeeves did it. Following up on this idea, theorists (for example, Quine and Ullian 1970) have tried to clarify the conditions under which one explanation is superior to another.

Some theorists try to elucidate the idea of justification by emphasizing its normative or evaluative dimension (Alston 1985). Perhaps, for example, we are justified in believing p when it is epistemically permissible (permissible insofar as we adopt the role of knowers) for us to believe p, which means roughly that we violate no epistemic rules when we believe p. Here justification is treated as a deontological concept, a concept concerned with duty. Or perhaps we are justified in believing p when believing p facilitates our achieving some suitable epistemic aim, such as the goal of constructing an accurate view of reality, or the goal of maximizing our true beliefs while minimizing our false beliefs. Here justification is treated as a teleological concept, a concept concerned with goal attainment.

Finally, some theorists (Goldman 1976b) attempt to explain justification in terms of reliable methods or processes that generate or support beliefs. The main idea is that a method of belief formation is reliable when it tends to produce mostly true beliefs rather than false ones, and a belief is justified when it is generated by a reliable method of belief formation. This is known as the reliabilist theory of justification.

The Structure of Justification

However we understand its ingredients, we face questions about the structure of justification. Chief among these is the question of whether epistemically viable beliefs rest on a foundation. Theorists called foundationalists say respectable beliefs must have a foundation, while others called coherentists say they need not.

Foundationalism. Traditionally, philosophers have held that epistemically respectable beliefs are arranged in a hierarchy in which higher beliefs depend for their justification on lower beliefs (Descartes 1641, Quinton 1973, Chisholm 1966, 1982, Alston 1976, Audi 1993). (For example, your belief that you are reading a book seems to rest on more fundamental beliefs, such as your seeming to see a book, your seeming to see words on its pages, and so forth.) The exception is the beliefs at the base of the hierarchy: they are justified, but not in terms of any other beliefs. (Your belief that you seem to see a book might be an example of a basic belief.) These basic beliefs are called basic, or foundational beliefs, and the kind of justification they possess is called noninferential justification (or basic or foundational or self justification). Derivative beliefs are called nonbasic (or nonfoundational) and the form of justification a belief acquires from other beliefs is called inferential justification (or nonbasic or nonfoundational justification). The traditional, foundationalist, view is that some of our beliefs have sufficient noninferential support to qualify as adequately justified, and that the justification of basic beliefs can be transferred to nonbasic beliefs, making adequate inferential justification possible. The first foundationalists, such as René Descartes, insisted that basic beliefs are incorrigible: we cannot be wrong when we believe them. Contemporary foundationalists rarely defend incorrigibilism; they are more likely to be fallibilists who say that basic beliefs have enough justification to support other claims but not enough to completely rule out the possibility of error. But for both incorrigibilists and fallibilists, justification is linear, since each belief is supported by the beliefs on which it rests and ultimately by basic beliefs.

Foundationalists face the task of clarifying how basic justification is possible, and how (and which) nonbasic beliefs can be justified in terms of basic beliefs. Often, foundationalists try to explain basic justification in terms of the relationship between beliefs and certain (other) psychological states. For example, my seeming to see a table, which is a psychological state, could be said to support my belief that a table is before me. But critics attempt to impale this explanation on the horns of a dilemma, as follows: either the relationship between a psychological state and the belief it ‘supports’ is inferential (I infer there is a table on the basis of being in the psychological state of seemingly seeing a table) or it is causal (seemingly seeing a table causes me to believe there is a table). If the former, then foundationalists have not really given us an account of basic justification, for an inferred belief is not basic. But if the relationship is causal, then the foundationalist’s explanation fails, since the causal relationship is not a justificatory relationship. After all, a certain sort of blow to the head might cause me to believe there are tables in front of me (or stars before my eyes), but undergoing such a blow does not justify me in believing anything about tables.

Foundationalists have also offered the reliabilist account of basic justification, according to which the justified status of basic beliefs derives from the reliability of their source. A basic belief is said to be justified by the reliability of its source, not by the further belief that the belief source is reliable. So a belief can be justified by the reliability of its source even if we are wholly unaware of the source’s reliability. But critics question the view that justification can derive from aspects of the world of which we are unaware. For example, it seems possible to imagine that Claire the Clairvoyant truly has a reliable source for beliefs of a certain sort—say beliefs about her dog. If people like Claire existed, the reliabilist would have to say that their (dog) beliefs are justified. But unless she has grounds for thinking that she has such a reliable source, it is worrisome to say that her beliefs really are justified. And, of course, if she does have grounds, her beliefs would then be inferential, not basic.

Coherentism. Many (perhaps most) epistemologists are unhappy with the idea of basic justification. According to the coherentist theory (Bosanquet 1920, Sellars 1973, BonJour 1985), justification is structured not as an edifice but rather as a web or mesh. The whole web is justified because of many justificatory interconnections among the beliefs constituting the web, and individual beliefs are justified because they are part of such a web. The tighter the justificatory interconnections among the component beliefs, the more coherent the web as a whole is said to be. On this view, the justification of individual beliefs is circular rather than linear. For while belief 1 might receive support from belief 2, and 2 from 3, belief 3 might also receive support from belief 1. But coherentists maintain that circular justification can be proper, so long as the beliefs form a substantial and tightly woven system, like the threads of a web or, as Haack suggests, like the entries in a crossword puzzle. The task for coherentists is to clarify which sorts of interrelationships constitute coherence, and to make it clear why such interrelationships constitute support. While coherentism is a work in progress, there is agreement that coherence is enhanced by the relations of entailment and explanation: coherence is enhanced, that is, to the extent that the beliefs in a web are mutually explanatory, or mutually entailing. Still, critics charge that a great many sets of beliefs are, individually, highly coherent; collectively, these sets of beliefs are mutually incompatible, and hence not all of them can be rationally acceptable.

Typically, coherentists say that a belief is justified only if there are positive grounds (derived from other beliefs in the web) for thinking it is true; that is, only if holding it makes the web significantly more coherent. This view might be called positive coherentism. But some theorists (Peirce 1877, Harman 1986) say that a belief is justified so long as there are not positive grounds for thinking it is false; that is, a belief is justified unless holding it makes the web significantly less coherent. This position can be called negative coherentism.

Contextualism. Some theorists (Wittgenstein 1969, Annis 1978) claim that some groundless beliefs (ones that lack any justification, whether basic or nonbasic), which are selected by the context, can justify other beliefs. Thus, for example, given the context of home construction, carpenters take it for granted that wood will continue to be rigid and will not suddenly become as weak as cotton candy; they do not need to justify their beliefs about the fundamental properties of wood for these beliefs to serve as justifying grounds for other beliefs, such as the belief that a certain configuration of two-by-fours will support a roof. This view might be called structural contextualism.

Structural contextualism is not the view that, in the modern philosophical literature, most commonly bears the name ‘contextualism’. As we shall see in Chapter 4, the name ‘contextualism’ generally refers to one version of the view that the standards which a given person S’s beliefs must meet in order to qualify as justified or known vary from context to context. For ordinary purposes, such as my efforts to landscape my house, relatively weak standards must be met if I am to know such things as that the tree outside my window is a persimmon, but in the context of a scientific investigation, tighter standards are appropriate. Some theorists, whom we might call agent-centered contextualists, say that the applicable standards vary depending on the context of the person (say, S) whose belief is assessed. Other theorists (Lewis 1979, 1996; Cohen 1988, 1998; DeRose 1995), whom we may call speaker-centered contextualists, say that the applicable standards vary depending on the context of the speaker, the person who assesses the epistemic status of S’s beliefs. That is, whether it is correct for you to say that I know there is a persimmon tree outside my window depends on facts about your situation, rather than mine. If you are a fastidious scientist, perhaps it would be false for you to say that I know there is a persimmon outside my window, but correct for me, a lay person, to say that I know it. Almost all theorists now accept agent-centered contextualism, and say that our epistemic apparatus must be more versatile in some circumstances than others in order to produce knowledge. But the term ‘contextualism’ is usually associated with speaker-centered contextualism. Speaker-centered contextualism remains highly controversial.


Do the truth, belief, and justification conditions constitute necessary and sufficient conditions for knowledge? That is, can we accept the following analysis of knowledge:

S knows p if and only if the following conditions are met:

p is true

S believes p

S’s belief p is justified?

This account has been so widely endorsed that it is known as the standard analysis of knowledge. However, it faces substantial difficulties. Chief among these is the charge that so-called Gettier cases show that the standard analysis is too weak. That is, Gettier cases show it is possible to believe a true proposition on eminently good grounds yet fail to know the proposition is true. In response to Gettier cases, we will want to adopt an account of knowledge that is more demanding than the standard analysis. But when we strengthen our account, we must try to avoid a second difficulty: if we adopt conditions for knowledge that are very difficult to meet, we will find it harder to resist certain powerful skeptical arguments that support the idea that we truly know, or even justifiably believe, little if anything.
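The logical shape of the standard analysis, and the way counterexamples bear on it, can be summarized in a short sketch. This is purely illustrative: the field names are invented, and treating justification as a simple yes-or-no matter abstracts away everything controversial about it.

```python
from dataclasses import dataclass

@dataclass
class EpistemicState:
    p_is_true: bool            # the truth condition
    s_believes_p: bool         # the belief condition
    belief_is_justified: bool  # the justification condition

def knows_by_standard_analysis(state):
    """The standard analysis: knowledge is justified true belief."""
    return state.p_is_true and state.s_believes_p and state.belief_is_justified

# The desert coin-toss case: true belief, but no substantial grounds, so the
# analysis rightly refuses to count it as knowledge.
coin_toss = EpistemicState(p_is_true=True, s_believes_p=True,
                           belief_is_justified=False)
print(knows_by_standard_analysis(coin_toss))  # False

# A Gettier case: all three conditions hold, yet intuitively the subject does
# not know, which is why the standard analysis is charged with being too weak.
gettier = EpistemicState(p_is_true=True, s_believes_p=True,
                         belief_is_justified=True)
print(knows_by_standard_analysis(gettier))  # True, though intuitively not knowledge
```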

Gettier Cases

Edmund Gettier (1963) drove home the point that the standard analysis is too weak (although his reasoning was hinted at by others, such as Russell 1912). For Gettier was able to show that we can have substantial grounds for believing something yet still be correct only by accident (Unger 1968). Consider the following situation (the example is Keith Lehrer’s (1965) modification of Gettier’s own illustration):

The Ford Case: Suppose I am in my office, along with a fellow named Mr. Nogot, who has just shown me a document certifying that he owns a Ford. I would have excellent grounds for believing the following proposition:

(1) Mr. Nogot is in my office and he owns a Ford.

Suppose, further, that I notice that (1) entails the following:

(2) Someone in my office owns a Ford.

Because it follows from (1), I come to believe (2), and have excellent grounds for doing so. So according to the standard analysis, I would know that (2) is true—I would know someone in my office owns a Ford. Now let us embellish the case a bit: suppose that, as it turns out, the car Nogot used to own has been stolen and completely destroyed, but that, unbeknownst to me, another person in my office, Mr. Havit, does own a Ford, so that (2) remains true.

So Nogot does not own a Ford; (1) is false. Yet I would still have good reason for thinking that (1) is true, and hence that (2) is true. According to the standard analysis, I would still know that (2) is true. Yet it is obvious that I would know no such thing. Apparently, the conditions that make up the standard analysis are not sufficient for knowledge.

Internalism versus Externalism

Contemporary epistemologists have attempted to strengthen the standard analysis in such a way as to render it adequate to handle cases like Gettier’s. In doing so, however, they have found it tempting to abandon old ways of thinking about the discriminatory powers we need to achieve knowledge or justified belief. Traditionally, our discriminatory powers were understood in terms of evidence that is directly accessible; that is, the evidence which allows us to choose between a belief p and the alternatives to p was assumed to be available through introspective access. The traditional view is internalist: the conditions that determine whether we are justified in believing something are accessible from the inside, from within our cognitive perspective, accessible in the (presumptively error-free) way that pains, beliefs, and desires are accessible. This view may be called justification internalism. The corresponding doctrine as applied to knowledge is knowledge internalism: the factors that determine whether we know something are accessible from within our cognitive perspective.

Gettier’s challenge has led many theorists to abandon knowledge internalism. Some abandon both knowledge and justification internalism. Contemporary theorists have given externalist accounts of the discriminatory powers that make knowledge or justified belief possible. They say that the relevant powers depend on features of belief-formation processes (and the circumstances of their use) that are external to the epistemic agent’s internal perspective. For example, externalists will insist, primitive people who saw a tiger would have known there was a tiger in front of them so long as their visual apparatus was functioning properly, and so long as it was used in the sort of circumstances appropriate to the identification of large animals. Yet early human beings had little grasp of the factors that allow the visual process to work and little idea about the circumstances under which it provides accurate information. Knowledge and perhaps even justified belief are made possible by the reception of information about the world even if we have no good perspective on how that reception is possible.

The four leading responses to Gettier all abandon knowledge internalism. These are the causal theory, the defeasibility theory, the true lemmas view, and the reliabilist theory. (The injunctions against defeaters and against false lemmas are externalist conditions.) Let us discuss each of these theories.

The Causal Theory

When confronted with the Ford case, and asked why I would fail to know that someone in my office owns a Ford, causal theorists (Grice 1961, Goldman 1967) say that my belief lacks the appropriate sort of cause. The appropriate cause involves a chain of events, a series in which each event causes the next, like a row of dropping dominos. For example, the event of my hearing a certain sound was caused by the event of sound waves moving through the air, which was in turn caused by rocks falling down the face of a cliff, and so on. Our coming to believe something is itself an event brought about by a previous chain of occurrences. According to causal theorists, if I am to know p, my belief p must be brought about by a causal chain that includes the fact p, where the fact p is the state of affairs that makes the proposition p true. I know there is a table in front of me because the fact that there is a table in front of me causes light with certain characteristics to stimulate my visual receptors, resulting ultimately in my belief that there is a table in front of me. Thus, simplifying the view somewhat, the causal theory is this:

S knows p if and only if the following conditions are met:

(1) p is true

(2) S believes p

(3) The fact p (is part of a causal chain that) causes S to believe p.

Consider how the causal theory would handle the Ford case. What makes p true is the fact that Mr. Havit, who is in my office, owns a Ford. But this is not the cause of my belief p. Instead, my belief is caused by facts about Mr. Nogot, who does not own a Ford.

Is the causal theory adequate? Apparently not, as a new case will show.

The Papier-Mâché Barns Case: Suppose I am driving down the road and see a barn. I come to believe that there is a barn to my left. However, unknown to me, I am looking at the only real barn in the area. I have wandered into a region in which a crazy artist (Christo’s dumber younger brother) has set out hundreds of papier-mâché barns that, visually, cannot be distinguished from the real thing.

So far as the causal theory is concerned, I would know that there is a barn to my left. Intuitively, however, this result is incorrect; in an area filled with fake barns, I would not know I am seeing a barn even if I am; I am correct about the barn, but only by accident.

The Defeasibility Theory

Proponents of the defeasibility theory (Lehrer and Paxson 1969, Sosa 1969, Klein 1971) suggest that in Gettier cases evidence that normally would count as adequate justification for believing a proposition p is undermined or defeated by some true proposition about our situation. They then specify what is involved when a proposition defeats a justification. According to one simple version of the view, a proposition d defeats person S’s justification e for believing p if and only if

(a) d is true, and

(b) the conjunction of e and d does not completely justify S in believing p.

Knowledge they analyze in terms of defeasibility, as follows:

S knows p if and only if the following conditions are met:

(1) S believes p

(2) S is justified in believing p

(3) S’s justification e is not (capable of being) defeated.

Notice that not-p, if true, automatically defeats our justification for believing p, no matter what that justification might be. So condition 3 entails that p is true, which renders a truth condition redundant.
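The reasoning behind this redundancy claim can be spelled out schematically (the notation is mine, not the author's): write J(e, p) for "e completely justifies S in believing p".

```latex
% d defeats e as justification for p iff (a) d is true and (b) not J(e & d, p).
% Claim: if S's justification e is undefeated, then p is true.
\begin{align*}
&\text{Suppose } p \text{ is false. Then:}\\
&\quad \text{(a) } \lnot p \text{ is true;}\\
&\quad \text{(b) } \lnot J(e \land \lnot p,\ p)
    \quad\text{(no evidence conjoined with } \lnot p \text{ justifies } p\text{);}\\
&\text{so } \lnot p \text{ defeats } e.
    \text{ Contraposing: if } e \text{ is undefeated, then } p \text{ is true.}
\end{align*}
```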

The defeasibility account has no difficulty dealing with the Ford case. In that case, S’s belief that someone in S’s office owns a Ford was inferred from S’s belief that Mr. Nogot has a Ford. But Mr. Nogot doesn’t own a Ford, and this fact defeats the evidence S has for believing what S believes. The defeasibility theory can also handle the papier-mâché barns case; there the defeating proposition is the fact that the region I am in is filled with fake barns that cannot easily be distinguished from the real thing.

So the defeasibility theory is not without virtues. Unfortunately, it is not without flaws either. The problem with the account as we’ve stated it so far is that it is implausible to allow just any true statement to defeat the justification we might have for our known beliefs. Some truths are misleading: they ruin a justification without undermining S’s knowledge. Lehrer and Paxson (1969) provide a straightforward example.

The Tom Grabit Case: Suppose I see Tom Grabit make off with a library book and am therefore justified in believing p: Tom Grabit has stolen a library book. Across town and out of my earshot, however, Tom’s mother Bernice Grabit has claimed that Tom was not in the library at all; rather, Tom’s twin brother Tim was there. But Bernice is a compulsive liar!

Because Tom’s mother is a compulsive liar and Tom has no brother at all, we would be inclined to say that I know p. But the fact that Bernice has claimed that Tom was not in the library defeats my justification for believing p. It, conjoined with the evidence that was good grounds for accepting p, does not justify me in believing p.

Dutifully, defeasibilists suggest revisions of their view to accommodate the Tom Grabit Case, but we shall put these aside, and turn to a third analysis of knowledge.

The True Lemmas View

According to some theorists (Clark 1963, Harman 1968), the key to handling Gettier cases is to supplement the truth condition we discussed earlier with a second truth condition. Let us use the term ‘lemma’ to refer to claims on which we rely in the course of arriving at a conclusion. For example, in the Ford case, in concluding that someone in my office owns a Ford, my reasoning depended on the claim that Mr. Nogot owns a Ford; this latter is therefore a lemma. And it is a false lemma, which suggests that the proper way to analyze knowledge is to insist that our lemmas be true, as in the following analysis, called the true lemmas view:

S knows p if and only if the following conditions are met:

(1) p is true

(2) S believes p

(3) S’s belief p is justified.

(4) All claims essential to S’s justification for believing p (all lemmas) are true.

But does the true lemmas view handle the papier-mâché barns case? It does if we stipulate that, on the way to my belief in the reality of the barn before me, I assumed (perhaps implicitly, that is, without spelling it out to myself) that my circumstances were ones in which barn-appearances were reliable indicators of the presence of barns, for that assumption was false.

How does the view fare with the Tom Grabit case? Arguably, it performs well, assuming that no assumptions about Bernice G. were part of my reasoning.

The Reliabilist Theory

A fourth account of knowledge is called the reliabilist theory (or reliabilism). Reliabilists, such as Armstrong (1973), develop the observation that knowledge is produced by methods of belief formation or sustenance that are in some sense reliable. They focus on methods or processes of forming or sustaining beliefs. The visual process, for example, might be a method of forming a belief (as when I come to believe my puppy is at the door by seeing it), or a method of sustaining a belief (as when I continue to believe my dog is staying put by keeping an eye on it). Exploiting this insight, reliability theorists offer roughly the following analysis of knowledge:

S knows p if and only if the following conditions are met:

(1) p is true

(2) S believes p via method M

(3) M is reliable.

Notice that the true lemmas view entails reliabilism given that, as the true lemmas theorist will say, we always assume, either implicitly or explicitly, that the method by which we arrive at a belief is reliable, and given that our belief’s source must truly be reliable if our belief is to count as knowledge. Defeasibilism also entails reliabilism, given that the evidence we might have for a belief is defeated by the fact that we have arrived at our belief through an unreliable method.

The reliability theory gets complicated when its proponents try to clarify what they mean by ‘reliable’. At this point it becomes clear that many versions of reliabilism are possible. One idea of reliability centers on the fact that after producing (or sustaining) many beliefs, some methods come to have good track records for guiding us to the truth. In fact, over the course of their entire performance in the past, present and future, they will have produced substantially more true beliefs than false beliefs. This we might term track record reliability.

Unfortunately, knowledge cannot be analyzed in terms of track record reliability. The main problem involves ways of arriving at beliefs that are rarely used, as in the following illustration.

The Dirty Prediction Case: Suppose that I base my belief that the winning numbers for the next state lottery are 3, 18, 63, 33, 41 and 24 on the fact that a certain patch of dirt has some specific bizarre configuration. Suppose, too, that this configuration will occur just once in all of time—I’m seeing it the one and only time it presents itself. Finally, suppose that my belief is correct: these are the winning numbers.

So this method produces just one belief in its entire history, and that belief is correct. Its track record is therefore excellent: the proportion of true beliefs versus false beliefs is as high as it could be! Yet it is obvious that I cannot know the winning state lottery numbers on the basis of some unique and unrepeated configuration of dirt.
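The arithmetic behind this verdict can be made vivid with a toy calculation (a sketch of my own, not from the text): a method's track record is just the proportion of true beliefs among all the beliefs it has ever produced, so a once-used, once-right method scores perfectly.

```python
def track_record(outcomes):
    """Proportion of true beliefs among all beliefs a method has ever produced.

    `outcomes` lists the method's whole history: True for each true belief,
    False for each false one.
    """
    return sum(outcomes) / len(outcomes)

# A heavily used, generally accurate method (say, ordinary vision):
# 90 true beliefs and 10 false ones over its entire history.
vision_history = [True] * 90 + [False] * 10

# The dirt-based method of the Dirty Prediction Case: used exactly once,
# and its single belief happened to be true.
dirt_history = [True]

print(track_record(vision_history))  # 0.9
print(track_record(dirt_history))    # 1.0 -- a flawless record, by sheer luck
```

On the track-record criterion the dirt-based method outscores vision, which is exactly the wrong verdict; this is why the discussion turns next to propensity (counterfactual) reliability.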

In response to the dirty prediction case, let us consider a different type of reliability. Note that even though the dirt-based method of belief formation will in fact be used only once, it might have been used any number of times. Hence we can ask about the track record it would have if, contrary to fact, it were used over and over again. That is, we can ask what goes into the blank in the following counterfactual conditional,

If the dirt-based method were used many times, its track record would be _____.

(A proposition is said to be conditional when it is of the form if p then q. For example, if pigs had wings they could fly is a conditional. A conditional is said to be counterfactual when the ‘if’ clause—the proposition substituted for p—is in fact false. For more on counterfactual and subjunctive conditionals, see Lewis 1973.)

So what is the answer to our question about the dirt-based method? Clearly, its track record would have been dismal. That is, if, after observing the same configuration of dirt, I were to predict that the winning numbers for the next state lottery will be 3, 18, 63, 33, 41 and 24, I would almost always, if not always, be misled. Let us use the term propensity reliability to refer to the property a method of belief formation M has when the following counterfactual conditional is met:

If M were used repeatedly, it would produce substantially more true beliefs than false beliefs.

Can knowledge be defined in terms of propensity reliability?

No, it is still too weak. In the papier-mâché barns case my method of believing that there is a barn to the left—namely the visual process—is reliable in the sense that it would produce mostly true beliefs if used repeatedly, yet it does not position me to know there is a barn to the left. But perhaps we are on the right track anyway. Perhaps what we need, between a method and the beliefs it produces, is a counterfactual dependence that is strong enough to count as a kind of infallibility within a limited set of circumstances. Perhaps a method M that enables a person S to know p must meet something like the following condition, as several theorists (Dretske 1970 and 1971, Carrier 1971, Goldman 1976a, Nozick 1981) have suggested—or perhaps some version of its contrapositive, as others (Luper 1984; Sosa 1999, 2003) have suggested:

if p were false, M would not (in S’s circumstances and in others much like them) lead S to believe that p was true.

When this condition is met, let us say (using Nozick’s helpful terminology) that S’s belief tracks the truth of p via M. Accordingly, tracking theorists analyze knowledge in roughly the following way:

S knows p if and only if the following conditions are met:

(1) Method M led S to believe p

(2) if p were false, M would not (in S’s circumstances and in others much like them) have led S to believe p.

(The combination of these two conditions implies that p is true, so the truth condition is redundant.)

If we adopt the form of reliabilism developed by tracking theorists, we can handle the papier-mâché barns case. In the situation described, the visual process is my method for believing that there is a barn to my left. Yet I do not track the truth via this method: if there had not been a barn to my left, the visual process still might have led me to believe there was, since I was surrounded by fake barns, and could easily have been misled by one of them. Tracking theorists can also handle the Grabit case, where my method is: seeming to see Tom G. steal a book. Notwithstanding Bernice G.’s lies, if it had been false that Tom stole the book, this method would not have led me to believe it was true, given the circumstances as described in the case.

This last account of knowledge is closely related to a view of knowledge that has been called the relevant alternatives theory, defended by Gail Stine (1976) and attributable to J.L. Austin (1961b). On this view, to know p we must be able to rule out all of the possible alternatives to p that are relevant. Now, not all of the ways in which p might be false are relevant. If I am looking right at a crow in Texas, I can know I see a crow even if I cannot distinguish crows from some sort of black bird that only lives in the Amazon jungle, and even if I cannot tell the difference between crows and the robot birds assembled by the inhabitants of the far planet Crouton, for, in my situation, the possibility that I am seeing the bird from the Amazon or from Crouton is not relevant. But when a possibility is relevant, and inconsistent with p’s truth, then we must be able to rule it out, according to the relevant alternatives view. If, for example, I cannot distinguish crows from grackles, then I do not know that the crow I see really is a crow, for grackles are common in central Texas, and the possibility that I am seeing one is relevant. The chief weakness of the relevant alternatives theory is that it is not obvious what makes an alternative relevant. Still, by combining the relevant alternatives view with a clearer account, such as the tracking theory, this weakness can be overcome (Nozick 1981). For example, we can say that A is a relevant alternative to p if and only if the following counterfactual condition is met: if, in circumstances like S’s, p were false, A would hold. Even if the bird in front of me were not a crow, it would not be a robot bird, so the robot bird possibility is irrelevant. However, if the bird in front of me were not a crow, it might well be a grackle, so the grackle possibility is relevant.

This concludes our sketch of the leading analyses of knowledge. Now let us turn to the topic of skepticism.

Skepticism

One of the most puzzling challenges facing epistemologists is how to respond to skeptical arguments that suggest that we know little if anything (knowledge skepticism) or that we justifiably believe little or nothing (justification skepticism). Global (or radical) skepticism challenges the epistemic credentials of all beliefs, saying that no one knows anything, or no belief is justified. More local skepticism challenges the beliefs of some domain. For example, some skeptics challenge our claim to know about the existence and contents of minds other than our own, and do not deny that we know other sorts of things.

Defenses of Skepticism

To defend their views, skeptics presuppose or defend requirements for knowledge or justified belief, and try to show that these requirements are not met--or cannot be met. Two requirements in particular have seemed necessary to both skeptics and many of their adversaries. To know p, or even to be justified in believing p,

(1) We must have grounds for accepting p, and

(2) These grounds must be discriminating: they must make p more likely than competing alternatives to p, where an alternative to p entails that p is false.

Regress skeptics try to show that neither condition is met. Indiscernibility (or Cartesian) skeptics try to show that we lack discriminating grounds for our beliefs. Let us briefly discuss each form of skepticism.

Regress skepticism. Starting with the followers of the ancient philosopher Pyrrho of Elis (c. 365-270 B.C.), some skeptics assume that to be justified or known a proposition must be supported on the basis of another proposition or chain of propositions, and then argue that no such support is ever possible. For a chain of putative support must (a) begin with propositions that are based on nothing, or (b) circle back on itself, or (c) go on endlessly. However, endless chains of justification are impossible for human beings to grasp or construct, so we can rule out (c). Circular chains, on the other hand, do not provide justification, and neither do chains that begin with arbitrary assumptions; either way—whether we choose possibility (a) or (b)—our beliefs are ultimately groundless, and hence not justified. This defense of skepticism might be called the belief regress argument.

Indiscernibility skepticism. Other skeptics base their doubts on our inability to rule out certain possibilities we might call skeptical scenarios. A skeptical scenario has the following peculiarity: whether the scenario holds or not, everything appears the same—either way, we have the same beliefs and perceptual states. For example, consider the possibility that I am in my bed in the midst of a brightly vivid (and wholly boring!) dream that mirrors the events of a normal day in the life of a philosopher. Assuming, as the skeptic does, that evidence or justification is entirely a matter of the ways things appear to us, then no evidence that I possess enables me to distinguish between the skeptic’s possibility (vividly dreaming) and the situation in which I am awake. My evidence is compatible with both possibilities. So I do not know, and am not justified in believing, that I am not dreaming, and it seems to follow that I do not know, and am not justified in believing, a great many of the common sense things I believe, such as that I am awake, in my office, typing, and so forth. This defense of skepticism can be called indiscernibility skepticism. It might also be called Cartesian skepticism, after the philosopher René Descartes, who discussed it in his Meditations (1641).

But does my inability to completely rule out the skeptic’s possibilities really undermine the epistemic status of my common sense beliefs? According to some contemporary theorists, it would do so only if the following principle of closure (or principle of entailment) were true:

We know things when we believe them upon seeing that they are entailed by other things we know.

Using this principle, the skeptic can offer the following skeptical argument from closure, which uses skeptical possibilities to undermine the epistemic status of common sense beliefs:

1. If a person S knows one thing, p, and believes something else, q, on the basis of the accurate realization that p entails q, then S knows q (this is the principle of closure).

2. Being at work in my office at school entails that I am not in my bedroom across town dreaming I am at work in my office (it also entails that I am not a brain in a vat on a distant planet, and that various other skeptical scenarios fail to hold).

3. So if I knew I was at work in my office, I would know I was not in my bedroom dreaming (and, similarly, that various other skeptical scenarios fail to hold).

4. But I don’t know that I am not in bed dreaming (since if I were I would still think I am not).

5. So I don’t know I am at work in my office, and for similar reasons I know none of the commonsense claims that are incompatible with my being in bed dreaming.
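The five steps above can be compressed into a schematic derivation (the symbols are mine): abbreviate "S knows p" as Kp, let o be the proposition that I am at work in my office, and let d be the proposition that I am in my bedroom dreaming.

```latex
\begin{align*}
&\text{1. } \big(Kp \land K(p \rightarrow q)\big) \rightarrow Kq
    && \text{closure (premise)}\\
&\text{2. } o \rightarrow \lnot d
    && \text{premise: being at work entails not dreaming in bed}\\
&\text{3. } Ko \rightarrow K\lnot d
    && \text{from 1 and 2, given that I see the entailment}\\
&\text{4. } \lnot K\lnot d
    && \text{premise: the dream scenario is indiscernible}\\
&\text{5. } \lnot Ko
    && \text{from 3 and 4, by modus tollens}
\end{align*}
```

Put this way, it is clear why critics of the argument (Dretske 1970, Nozick 1981) concentrate their fire on step 1, the closure principle itself.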

Responses to Skepticism

Critics of skepticism can take three main approaches. They can target global skepticism and try to show that the skeptic’s thesis is incoherent in some way. This tactic might be called an incoherence response to skepticism. If successful, an incoherence response would allow us to bypass the skeptics’ arguments and reject their conclusion on the grounds that it makes no sense. A second tactic is an indefensibility response, which draws our attention to the tension between the global skeptic’s thesis, which is the denial that any claim is defensible, and its defense. On this approach we say that any attempt to defend global skepticism is condemned from the start by the skeptical thesis itself. A third strategy is to focus on the skeptics’ arguments themselves and offer counterdefense responses. This sort of response targets the skeptic who argues that the requirements for knowledge or justified belief are not satisfied. A counterdefense tries to establish that the requirements are met, or that the skeptics’ arguments do not show that the requirements are not met. For example, the skeptical argument from closure is sometimes criticized (Dretske 1970, Nozick 1981) on the grounds that it relies on a false premise, namely the assumption that the principle of closure is true.

While skepticism has been attacked in all of these ways, it remains one of the most highly contested issues in philosophy. In part this is because the term ‘skepticism’ is not limited to any one argument or claim, and those who defeat one skeptical argument or claim will encounter critics who consider skepticism unscathed because other skeptical arguments have not been overcome.

Further Reading

The Truth Condition

Armstrong, D. M. (1973) Belief, Truth, and Knowledge. Cambridge: Cambridge University Press.

Austin, J. L. (1961a) "Truth," in Philosophical Papers. Oxford: Clarendon Press.

Blanshard, B. (1939) The Nature of Thought. London: Allen and Unwin.

Fisch, M., ed. (1951) Classic American Philosophers. New York: Appleton-Century-Crofts, Inc.

James, W. (1896) "The Will to Believe," The New World 5, No. 18: 327-347. Also in Fisch 1951.

Moore, G.E. (1953) Some Main Problems in Philosophy. London: Allen & Unwin.

Peirce, C. (1877b) "How to Make Our Ideas Clear," Popular Science Monthly. D. Appleton and Company, 1877. Also in Fisch 1951.

Russell, B. (1910) Philosophical Essays. New York: Simon and Schuster. Chapter 7.

Tarski, A. (1944) "The Semantic Conception of Truth and the Foundations of Semantics," Philosophy and Phenomenological Research 4: 341-375.

The Belief Condition

Ayer, A. J. (1956) The Problem of Knowledge. Harmondsworth: Penguin Books, Ltd.

Duncan-Jones, A. (1938) "Further Questions about ‘Know’ and ‘Think’." Analysis 5.5.

Lehrer, K. (1974) Knowledge. Oxford: Oxford University Press.

----- (1989) "Knowledge Reconsidered." M. Clay and K. Lehrer, eds. Knowledge and Skepticism. Boulder: Westview Press.

Radford, C. (1966). "Knowledge—By Examples." Analysis 27.1: 1-11.

Luper, S. (1998) "Belief and Knowledge," in Routledge Encyclopedia of Philosophy. London: Routledge.

The Justification Condition

Alston, W. (1971) "Varieties of Privileged Access," American Philosophical Quarterly 9: 223-41.

----- (1976) "Has Foundationalism Been Refuted?" Philosophical Studies 29: 287-305.

----- (1985) "Concepts of Epistemic Justification," Monist.

Annis, D. (1978) "A Contextualist Theory of Epistemic Justification," American Philosophical Quarterly 15: 213-19.

Audi, R. (1993) The Structure of Justification. Cambridge: Cambridge University Press.

BonJour, L. (1985) The Structure of Empirical Knowledge. Cambridge, MA: Harvard University Press.

Bosanquet, B. (1920) Implication and Linear Inference. London: Macmillan.

Chisholm, R.M. (1966) Theory of Knowledge. Englewood Cliffs, N.J.: Prentice-Hall.

----- and Swartz, R.J., eds. (1973). Empirical Knowledge. Englewood Cliffs, N.J.: Prentice-Hall.

----- (1982). The Foundations of Knowing. Minneapolis: University of Minnesota Press.

Cohen, S. (1988) "How to be a Fallibilist," Philosophical Perspectives 2: 581-605.

----- (1998) "Contextualist Solutions to Epistemological Problems: Scepticism, Gettier, and the Lottery," in Australasian Journal of Philosophy 76: 289-306.

----- "Contextualism," this volume.

DeRose, K. (1995) "Solving the Skeptical Problem," The Philosophical Review 104: 1-52.

Descartes, R. (1641) Meditations. In Cottingham, J., et al., trans., The Philosophical Writings of Descartes. Cambridge: Cambridge University Press, 1984.

Feldman, R. and Conee, E. (1985) "Evidentialism," in Philosophical Studies 48: 15-34.

----- (2003) "Evidentialism" this volume.

Foley, R. (2003) "Epistemically Rational Belief as Invulnerability to Self-Criticism," this volume.

Goldman, A. (1976b) "What Is Justified Belief?" in Pappas, G.S., ed. Justification and Knowledge. Dordrecht: D. Reidel, 1-23.

Harman, G. (1986) Change In View. Cambridge, MA: MIT Press.

Quine, W.V.O., and Ullian, J. S. (1970) The Web of Belief. New York: Random House.

Quinton, A.M. (1973) The Nature of Things. London: Routledge & Kegan Paul.

Lewis, D. (1979) "Scorekeeping in a Language Game," Journal of Philosophical Logic 8: 339-59.

----- (1996) "Elusive Knowledge," Australasian Journal of Philosophy 74: 549-67.

Peirce, C. (1877) "The Fixation of Belief." Popular Science Monthly. D. Appleton and Company, November, 1877.

Wittgenstein, L. (1969). Anscombe, G.E.M. and von Wright, G.H., trans. On Certainty. New York: Harper Torchbooks.

Analysis of Knowledge

Austin, J. L. (1961b) "Other Minds," in Philosophical Papers. Oxford: Clarendon Press.

Carrier, L. S. (1971) "An Analysis of Empirical Knowledge," Southern Journal of Philosophy 9: 3-11.

Clark, M. (1963) "Knowledge and Grounds: A Comment on Mr. Gettier’s Paper," Analysis 24.2: 46-48.

Dretske, F. (1970) "Epistemic Operators," Journal of Philosophy 67: 1007-23.

----- (1971) "Conclusive Reasons," Australasian Journal of Philosophy 49: 1-22.

Gettier, E. (1963) "Is Justified True Belief Knowledge?" Analysis 23: 121-123.

Goldman, A. (1967) "A Causal Theory of Knowing," Journal of Philosophy 64: 355-372.

----- (1976a) "Discrimination and Perceptual Knowledge," Journal of Philosophy 73: 771-91.

Grice, H.P. (1961) "The Causal Theory of Perception." Proceedings of the Aristotelian Society, Supp. Vol. 35: 121-52.

Harman, G. (1968) "Knowledge, Inference, and Explanation," American Philosophical Quarterly 5: 164-173.

Klein, P. (1971) "A Proposed Definition of Propositional Knowledge," Journal of Philosophy 68: 471-82.

Lehrer, K. (1965) "Knowledge, Truth, and Evidence," Analysis 25: 168-175.

----- and Paxson, T. (1969) "Knowledge: Undefeated Justified True Belief," Journal of Philosophy 66: 225-37.

Lewis, D. (1973) Counterfactuals. Oxford: Blackwell.

Luper, S. (1984) "The Epistemic Predicament," Australasian Journal of Philosophy 62: 26-48.

Nozick, R. (1981) Philosophical Explanations. Cambridge, MA: Harvard University Press.

Prichard, H.A. (1950) Knowledge and Perception. Oxford: The Clarendon Press.

Ryle, G. (1949) The Concept of Mind. London: Hutchinson.

Russell, B. (1912) Problems of Philosophy. New York: Henry Holt and Co.

Sellars, W. (1973) "Givenness and Explanatory Coherence." Journal of Philosophy 70: 612-24.

Shope, R. K. (1983) The Analysis of Knowing. Princeton: Princeton University Press.

Sosa, E. (1969) "Propositional Knowledge," Philosophical Studies 20: 33-43.

----- (1999) "How to Defeat Opposition to Moore," Philosophical Perspectives 13: 141-152.

----- (2003) "Neither Contextualism Nor Skepticism," in S. Luper, Ed., The Skeptics (Aldershot: Ashgate Publishing, 2003), and this volume.

Stine, G. (1976) "Skepticism, Relevant Alternatives, and Deductive Closure," Philosophical Studies 29: 249-61, and this volume.

Unger, P. (1968) "The Analysis of Factual Knowledge," Journal of Philosophy 65: 157-170.

Skepticism

Stroud, B. (1984) The Significance of Philosophical Skepticism. Oxford: Clarendon Press.

Unger, P. (1975) Ignorance: A Case for Scepticism. Oxford: Clarendon Press.

Klein, P. (2002) "Skepticism," Stanford Encyclopedia of Philosophy,