The main topics we have covered so far this semester are these:
A. General Criteria of Demarcation.
We have looked at four main approaches to demarcation. Popper suggests that what separates science from nonscience is that scientific claims are falsifiable, whereas nonscientific or pseudoscientific claims are not. Kuhn argues that in fact the distinguishing mark of science is the presence of a puzzle-solving tradition. Ruse argues for a sort of cluster account of status as science: a discipline is more scientific the more it (a) appeals to universal laws, (b) provides explanations and predictions, (c) is testable, and (d) is tentative (i.e. does not make claims to certainty but rather remains open to the possibility of error; "undogmatic" might be a better term). Ruse also mentions a fifth criterion, having practitioners who exhibit "integrity." A more detailed overview of these pieces is here.
We discussed a number of issues related to the creation science controversy. The main reading was the interchange between Ruse and Laudan. I have an overview here.
A. Hempel's D-N model.
B. Ruben's causal model.
C. van Fraassen's pragmatic account.
A detailed summary of Hempel's account, and criticisms of it, along with brief accounts of Ruben and van Fraassen, are here.
A more detailed discussion of van Fraassen's view of explanation is here.
A. Maxwell's defense of realism
B. van Fraassen's defense of a version of antirealism (and description of other versions)
C. Arthur Fine's "Natural Ontological Attitude" or NOA
I have a summary of van Fraassen's view, with reference to Maxwell, here.
A summary of some of the arguments van Fraassen responds to is here.
An overview of Fine's NOA paper here.
We contrasted the syntactic view associated with Hempel and the positivists with the "semantic view" associated with van Fraassen, among others. We read Giere's explanation and defense of the semantic view, and Lloyd's application of it to evolutionary theory. I have some brief notes on this topic here.
Kitcher extends this discussion in an interesting way in "Darwin's Achievement." Kitcher suggests that the theory Darwin presented in the Origin of Species is not best thought of as a set of statements, but rather "as a collection . . . of problem-solving patterns" (p. 176).
A. The Problem of Underdetermination
Duhem: ambiguity of falsification; impossibility of crucial experiment.
HUD (Humean Underdetermination): The view that for any set of observations O, there will be indefinitely many theories that deductively imply those observations.
Laudan: distinguishes between numerous different senses of "underdetermination," and argues that there is no good reason to believe any of the ones that might have relativistic consequences.
Kitcher: when a theory faces apparently conflicting evidence, there are always different possible responses (give up the theory vs. give up a view about initial conditions or auxiliary hypotheses). But it doesn't follow that these alternatives are equally reasonable. We do a kind of cost-benefit analysis to determine which responses are reasonable. The most serious worry about underdetermination is that there might be different, equally reasonable ways of evaluating costs and benefits which would lead us to different conclusions. (This may have been true, for a while, about the conflict between the Ptolemaic and Copernican views of the solar system.)
My summary of some of these issues is here.
B. Confirmation by Positive Instances
This is what Lipton calls "the instantial model." A closely related view is sometimes called "enumerative induction." In its most basic form this is simply a special case of the H-D model. The idea is that a general law, e.g. "All ravens are black," is confirmed by its positive instances, e.g. black ravens. (The H-D model is more general than the instantial model, since the instantial model only addresses generalizations of the form "All A's are B's" where both A and B are observational predicates.)
Problem 1. Goodman's "grue" paradox (alias the "new riddle of induction").
Problem 2. The paradox of the ravens. "All ravens are black" is equivalent to "All non-black things are non-ravens" (at least if the conditional involved is truth-functional, which perhaps is not so clear). But a "positive instance" of that law is a non-black non-raven. So do non-black non-ravens confirm the hypothesis that all ravens are black?
Problem 3. This model only permits confirmation of generalizations about observable entities.
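The equivalence behind Problem 2 can be checked mechanically. Here is a small illustrative sketch (the choice of Python, and the helper name `implies`, are mine, not the readings'): with a truth-functional conditional, "all ravens are black" and "all non-black things are non-ravens" get the same truth value in every one of the four possible cases.

```python
from itertools import product

def implies(p, q):
    """Truth-functional conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

# Run through all four combinations of (is_raven, is_black) and check that
# the conditional and its contrapositive never come apart.
for is_raven, is_black in product([True, False], repeat=2):
    assert implies(is_raven, is_black) == implies(not is_black, not is_raven)

print("equivalent in all four cases")
```

Of course, this only shows that the two formulations stand or fall together *if* the conditional is truth-functional, which is exactly the assumption the parenthetical remark above flags as questionable.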
C. The Hypothetico-Deductive (H-D) Model
We use theories to generate predictions, then check to see whether the predictions are accurate. If they are, the theory is confirmed to some extent. This fits very neatly with Hempel's Deductive-Nomological model of explanation. It avoids Problem 3 of the instantial model. The other two problems, however, remain, since the H-D model includes the instantial model as a special case.
Problem 1. Goodman's paradox. This can be seen as a particularly extreme instance of a more general problem, the problem of alternative hypotheses. Other hypotheses might have generated the same predictions. How do we determine which is best confirmed? (Some, including Kuhn, would answer: by appeal to pragmatic criteria such as simplicity, scope, compatibility with other current theories, etc. Of course, van Fraassen insists that such pragmatic criteria, while important criteria for deciding which theories to accept, have nothing to do with truth or, therefore, confirmation.)
Problem 2. The paradox of the ravens. Also inherited from the instantial model.
Problem 3. Sometimes observing that a prediction is true does not confirm the theory from which the prediction was deduced. (Compare Popper's critique of confirmation.) The main problem here is that some predictions are of things that we would expect to be true even if the theory were false. Examples: (1) theory: Jeanne Dixon can predict the future. Prediction: her predictions will turn out to be true. Problem: these predictions are so vague that a wide variety of events can be interpreted as making them true. So we would expect them to be true even if she can't predict the future. (2) Theory: this homeopathic remedy can cure the common cold. Prediction: if I take it, I will get better in a few days. Problem: most colds clear up in a few days regardless of whether we take anything for them.
D. Falsificationism (Popper)
Popper's response to the problems with the instantial and H-D models is essentially to deny that there is any such thing as the confirmation of a theory by data. He suggests that theories can be falsified, but can never be confirmed. (However, he then introduces the notion of "corroboration" -- is this really different from confirmation?)
E. The Bayesian Approach
A few quick observations follow. For more detail, see the handouts on probability theory and (especially) on Bayesian views in the philosophy of science (in PDF format). You might also find my web overview helpful (it covers pretty much the same ground as the second pdf).
Bayes's rule seems to explain what is correct about the H-D model (and also about falsificationism), while extending the account so as to allow for probabilistic predictions and also to give a way of quantifying the degree of support for a hypothesis offered by a particular piece of evidence.
Bayes's rule says that P(H|E) = ( P(H) * P(E|H) ) / P(E). (And recall that the denominator, P(E), can be expanded into P(E|H) * P(H) + P(E|~H) * P(~H).) Two points about this formula are particularly worth noting. First, the higher P(E|H) is, the better E will confirm H. That is, the more likely the hypothesis, if true, would make the evidence, the better the evidence confirms the hypothesis. (In the special case of predictions deductively implied by a theory, of course, P(E|H) will be 1. In the opposite special case of falsification, when the hypothesis deductively implies the negation of E, P(E|H) will be 0, and hence the posterior probability will be 0, consistent with Popper's observation that a theory that predicts something that doesn't happen has been falsified.)
Second, the higher P(E) is, the less E confirms H. (In particular, given that in the expanded version the only item that can't already be determined from the numerator is P(E|~H), the higher P(E|~H) is, i.e. the more likely it is that the evidence is true given that the hypothesis is false, the less E does to confirm H.)
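Both points can be illustrated numerically. In the sketch below, only the formula is Bayes's rule; the particular prior and likelihood values are invented purely for illustration.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes's rule, with P(E) expanded by total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return (prior * p_e_given_h) / p_e

# Deductive prediction borne out (P(E|H) = 1): the posterior rises above the prior.
print(posterior(0.3, 1.0, 0.5))   # roughly 0.46, up from the 0.3 prior

# Falsification (P(E|H) = 0): the posterior is 0, matching Popper's observation.
print(posterior(0.3, 0.0, 0.5))   # 0.0

# Evidence we would expect whether or not H is true (P(E|~H) as high as P(E|H)):
# the posterior stays at the prior, so E does nothing to confirm H.
print(posterior(0.3, 0.9, 0.9))   # approximately 0.3
```

The third case is exactly the situation of the Jeanne Dixon and homeopathy examples above: when the evidence is just as likely on the assumption that the hypothesis is false, observing it provides no confirmation.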
The Bayesian approach clearly offers solutions to some of the problems with the other models. Unlike the instantial model and the H-D model, it offers an explicit account of the degree to which a theory is confirmed by observational evidence. And it also offers a possible response to the ravens paradox. However, it's not clear that the Bayesian approach helps with Goodman's puzzle about "grue" and similar predicates, or more generally with the problem of alternative hypotheses with the same observational consequences. (This is closely connected to Glymour's observation that Bayesianism does not give a reason to prefer simple theories to "deoccamized" theories.)
F. Kitcher's approach: "eliminative induction"
Kitcher offers a rather different approach to the relation between evidence and theory in his "Experimental Philosophy" chapter. In very general terms, his thought is that we confirm a theory or hypothesis by ruling out alternatives.
In his discussion of induction and the ravens paradox (sections 4-5), he offers a more specific version of this general idea. Suppose I want to know whether all A's are B's: specifically, let's say, whether all ravens are black. Kitcher thinks that we identify the characteristics of ravens that we know from prior research might have an effect on color: perhaps sex, maturity, climate, and vegetation. We then try to find ravens with every possible combination of these factors. Every time we find a black raven with a new combination of these factors, we have ruled out ("eliminated") another potential counterexample.
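The procedure just described can be sketched as simple bookkeeping over combinations of factors. The four factors are the illustrative ones mentioned above (sex, maturity, climate, vegetation); the particular values for each factor are hypothetical, chosen only to make the sketch run.

```python
from itertools import product

# Factors that prior research suggests might affect color (values hypothetical).
factors = {
    "sex": ["male", "female"],
    "maturity": ["juvenile", "adult"],
    "climate": ["temperate", "tropical"],
    "vegetation": ["forest", "grassland"],
}

# Each combination of factors is a potential source of counterexamples.
remaining = set(product(*factors.values()))   # 16 combinations to start

def observe_black_raven(sex, maturity, climate, vegetation):
    """Record a black raven with this combination, eliminating it as a potential counterexample."""
    remaining.discard((sex, maturity, climate, vegetation))

observe_black_raven("male", "adult", "temperate", "forest")
print(len(remaining))   # 15 combinations still unexamined
```

On this picture, confirmation is a matter of how far the search has gone: the hypothesis that all ravens are black is well confirmed when every combination has been checked and eliminated, not merely when the pile of black ravens is large.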
Kitcher develops the general idea of eliminating alternatives in the sections leading up to his discussion of Darwin (sections 6-8). In situations of underdetermination, where evidence plus deductive logic do not determine a unique theory, we can nevertheless try to eliminate some of the alternatives by showing that they are unreasonable (Kitcher tries to cash this out in terms of a kind of cost-benefit analysis: we could hang on to a theory despite apparently conflicting evidence, but the cost of doing so would be too high). Kitcher tries to show that Darwin essentially takes this approach in responding to creationism, arguing that although creationism is a logically possible view, it is not reasonable to hold onto it in the face of the available evidence.