Induction and Confirmation

We read three interrelated pieces on the problems of induction and their relation to confirmation: Lipton's essay on "induction," Popper's brief defense of falsification as a solution to the problem of induction, and Salmon's extended critique of Popper. (Unfortunately we don't have a detailed piece by Popper to go along with Salmon's detailed critique!)

Important distinctions:

1. The problem of justification vs. the problem of description.

The problem of justification is the problem of how we can justify using any principle of induction at all. This is Hume's famous problem of induction. There doesn't seem to be a non-circular way to defend induction, and a circular defense does not seem helpful. Lipton considers the vivid example of the inductive principle More of the Same, which assumes that the future will resemble the past. An example of its use might go something like this (suppose I am about to attempt to run a current through a sample of copper):

1. Up until now, when I tested samples of copper, they conducted electricity.
2. The future will (probably) resemble the past.
Therefore (probably),
3. This sample of copper will conduct electricity.

Why should we think that 2 is correct? We can't simply observe its correctness, since we can't observe the future. We can't deduce it from facts we already know. We can't determine a priori that it is correct, because it is not a necessary truth: it's entirely conceivable that it is incorrect. (For a more detailed discussion, see this handout on Hume's treatment of the problem.) It's hard to think of a justification that doesn't look like this:

1. Up until now, the future has resembled the past.
2. The future will (probably) resemble the past.
Therefore (probably),
3. In the future, the future will resemble the past.

This seems to beg the question (i.e. assume the very thing it is supposed to prove)! Lipton makes this vivid by noting that More of the Same is not the only inductive practice that can claim this kind of circular support. Compare Time for a Change:

1. Up until now, the future has resembled the past.
2. The future will (probably) not resemble the past.
Therefore (probably),
3. In the future, the future will not resemble the past.

If Time for a Change is correct, then the very fact that it has not been an effective inductive principle up until now gives us reason to think that it will be an effective principle in the future! (Time for a Change is often called the "principle of counterinduction." Interestingly, there is a chamber ensemble of composers and performers that calls itself "counter)induction"!)

More than two centuries after Hume's death, it's still not clear that there is a decisive answer to his problem of induction, that is, to the problem of justification.

The problem of description is the problem of stating the inductive principles we actually use. This is essentially the problem of when evidence supports a hypothesis. It is surprisingly difficult! The rest of this handout is primarily concerned with the problem of description.

2. Discovery vs. confirmation.

Induction is often thought of as a technique for discovering theories, for deriving theories from observations. One of the great insights shared by Popper and the defenders of the H-D model of confirmation is that this is the wrong way to think about things. The problem of how scientists devise theories in the first place is a psychological problem, and there may be no general solution to it. The crucial thing is that in a sense it doesn't matter how we arrive at theories in the first place. Maybe there's some generally effective technique, but maybe it requires an act of creative genius for which there is no algorithm. What matters is how we test theories once we've got them. This is the problem of the confirmation of theories by evidence.

In discussing the descriptive problem, Lipton considers the following accounts.

1. Confirmation by Positive Instances (the "instantial model").

In its most basic form this is simply a special case of the H-D model to be described below. The idea is that a general law, e.g. "All ravens are black," is confirmed by its positive instances, e.g. black ravens.  (The H-D model is more general than the instantial model, since the instantial model only addresses generalizations of the form "All A's are B's" where both A and B are observational predicates.)

Problem 1. Goodman's "grue" paradox (alias the "new riddle of induction"). (Lipton, p. 421.)

Let t be Wednesday, February 10, 2011 at 8:30 AM. Suppose I'm going to examine an emerald, e, at 9:00 AM on that date.

definition: an object x is grue if and only if EITHER x is examined before t and it is green, OR x is examined after t and it is blue.

Here is an inductive inference that seems intuitively reasonable. (Suppose that we make this inference on February 9, 2011, at 3:00 PM.)

All the emeralds I have examined so far have been green.
So,
the emerald I examine tomorrow at 9 will be green.

The problem is that the following inference has exactly the same form:

All the emeralds I have examined so far have been grue.
So,
the emerald I examine tomorrow at 9 will be grue.
(So, by the definition of grue, the emerald I examine tomorrow will be blue.)
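
To make the symmetry vivid, here is a small illustration in Python (my own sketch, not from the readings; the dates and the handful of recorded observations are invented for the example):

from datetime import datetime

# The cutoff time t from the example above: Feb. 10, 2011, 8:30 AM.
T = datetime(2011, 2, 10, 8, 30)

def is_green(color):
    return color == "green"

def is_grue(color, examined_at):
    # Goodman's predicate as defined above: examined before t and green,
    # or examined after t and blue.
    return color == "green" if examined_at < T else color == "blue"

# Every emerald examined so far (all before t) has been green.
observations = [("green", datetime(2011, 2, 8, 10, 0)),
                ("green", datetime(2011, 2, 9, 15, 0))]

# The very same evidence fits both generalizations:
print(all(is_green(color) for color, _ in observations))          # True
print(all(is_grue(color, when) for color, when in observations))  # True

# Yet "all emeralds are green" and "all emeralds are grue" disagree about
# the emerald examined at 9:00 AM on Feb. 10 (after t): green vs. blue.

The past evidence is formally neutral between the two generalizations; nothing in the observations themselves favors "green" over "grue."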

Problem 2. The paradox of the ravens. (Lipton, p. 421 again.) "All ravens are black" is logically equivalent to "All non-black things are non-ravens" (at least if the conditional involved is truth-functional, which perhaps is not so clear). Symbolically: Ax (Rx -> Bx) is logically equivalent to Ax (~Bx -> ~Rx).

But a "positive instance" of that law is a non-black non-raven. So do non-black non-ravens confirm the hypothesis that all ravens are black?

2. The Hypothetico-Deductive model

We use theories to generate predictions, then check to see whether the predictions are accurate. If they are, the theory is confirmed to some extent. This fits very neatly with Hempel's Deductive-Nomological model of explanation. It avoids the limitation of the instantial model noted above, since it is not restricted to generalizations of the form "All A's are B's" with observational predicates. The first two problems, however, remain, since the H-D model includes the instantial model as a special case.
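
As a toy illustration of the testing procedure (my own sketch, reusing an invented copper example like the one at the top of this handout; nothing here comes from Hempel or Lipton):

# Hypothetico-deductive testing, schematically: from a hypothesis plus the
# relevant initial conditions, deduce a prediction; then compare it with
# what is actually observed.

def prediction(hypothesis, condition):
    # Toy "deduction": "all copper conducts electricity" plus
    # "this sample is copper" yields "this sample conducts electricity."
    if hypothesis == "all copper conducts electricity" and condition == "this sample is copper":
        return "this sample conducts electricity"
    return None

observed = "this sample conducts electricity"
predicted = prediction("all copper conducts electricity", "this sample is copper")

if predicted == observed:
    print("Prediction borne out: the hypothesis is confirmed to some extent.")
else:
    print("Prediction failed: the hypothesis is disconfirmed.")

Of course, the philosophical work lies in saying how much such a success confirms the hypothesis, which is where the following problems come in.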

Problem 1. Goodman's paradox. This can be seen as a particularly extreme instance of a more general problem, the problem of alternative hypotheses. Other hypotheses might have generated the same predictions. How do we determine which is best confirmed? (Some, including Kuhn, would answer: by appeal to pragmatic criteria such as simplicity, scope, compatibility with other current theories, etc. Of course, van Fraassen insists that such pragmatic criteria, while important criteria for deciding which theories to accept, have nothing to do with truth or, therefore, confirmation.)

Problem 2. The paradox of the ravens. Also inherited from the instantial model.

Problem 3. Sometimes observing that a prediction is true does not confirm the theory from which the prediction was deduced. (Compare Popper's critique of confirmation.) The main problem here is that some predictions are of things that we would expect to be true even if the theory were false. Examples: (1) Theory: Jeanne Dixon can predict the future. Prediction: her predictions will turn out to be true. Problem: these predictions are so vague that a wide variety of events can be interpreted as making them true. So we would expect them to be true even if she can't predict the future. (2) Theory: this homeopathic remedy can cure the common cold. Prediction: if I take it, I will get better in a few days. Problem: most colds clear up in a few days regardless of whether we take anything for them.
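
One way to make this worry precise, though neither Lipton nor Popper puts it in these terms, is with a simple probability calculation: if the prediction was likely to come true whether or not the theory is correct, then its coming true barely raises the theory's probability. The numbers below are invented purely for illustration.

# Hypothetical numbers for the homeopathic-remedy example (pure illustration).
prior = 0.10                 # P(remedy works), before observing the outcome
p_recover_if_works = 0.95    # P(cold clears up in a few days | remedy works)
p_recover_if_not = 0.90      # P(cold clears up in a few days | remedy does nothing)

# Bayes' theorem: P(remedy works | I recovered)
p_recover = prior * p_recover_if_works + (1 - prior) * p_recover_if_not
posterior = prior * p_recover_if_works / p_recover

print(round(posterior, 3))   # ~0.105: barely above the 0.10 we started with

Because recovery was to be expected anyway, observing it does almost nothing to discriminate between "the remedy works" and "the remedy is inert."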

[3. Causal Inference

Example: Mill's Methods, the Method of Agreement and the Method of Difference.

I have this in square brackets because we won't directly pursue it further.]

After Lipton's piece, our text contains pieces by Popper and Salmon. Both are focused on a different approach to confirmation, Popper's "falsificationism."

4. Falsificationism (Popper)

Popper's response to the problems with the instantial and H-D models is essentially to deny that there is any such thing as the confirmation of a theory by data. He suggests that theories can be falsified, but can never be confirmed.

Salmon asks: how does this help us to determine which theories to rely on in making predictions about the future? Falsification can tell us which theories not to use, namely, those that have been falsified. But there will always be incompatible possible theories that have not been falsified, and which make conflicting predictions about the future. How can we decide which to make use of? (This is the problem he poses at p. 435.)

Popper's reply: we should prefer those theories which are (a) informative and (b) have withstood testing so far. (He also introduced a formal technical notion of "corroboration" that was supposed to measure this; Salmon mentions it, but neither reading goes into the details.)

The rest of Salmon's piece pursues this reply and argues that it really doesn't give us a satisfactory answer. Although Salmon doesn't put it quite this way, the issue is why either (a) or (b) should give us any reason to think that a particular theory will be more successful in making predictions. With regard to (a), the more informative a theory is, the less likely it is to be true, other things being equal. With regard to (b), why should the extent to which we have tested a theory give us any reason to rely on it? Popper insists that extensive testing which a theory passes does not give us a reason to think that the theory is true. So why should we think that it gives us a reason to use it in practice? I won't pursue the twists and turns of this part of Salmon's article.



Last update: February 9, 2011
Curtis Brown | Philosophy of Science | Philosophy Department | Trinity University
cbrown@trinity.edu