Alternative Systems of Propositional Logic
|1. Equivalent Systems of Propositional Logic|
Two formal systems are "equivalent" if all and only the derivations made possible by one are made possible by the other. That is, a set of rules P for propositional logic is equivalent to a different set P* if and only if, for any collection of premises P1, P2, …, Pn and conclusion C, you can derive C from P1, P2, …, Pn using P just in case you can derive C from P1, P2, …, Pn using P*.
How to prove that two formal systems are equivalent. Roughly: for each system, figure out what derivations are made possible by any rules it has that the other lacks, and show that you can get the same results using only the rules the other has. For example, contradiction elimination allows you to derive Q (no matter what Q is!) from ⊥. We can show that this can be done using only the remaining rules of our system: assume ¬Q; get ⊥ in the subproof by Reit; use ¬Intro to get ¬¬Q; and then use ¬Elim to get Q. Voila! So anything we can do with ⊥Elim we can also do using only the other rules of our formal system. (If we form a new system P* from a system P simply by eliminating one or more rules, then it is trivial to show that anything derivable in P* can be derived in P, since P has all of P*'s rules and then some; so the only thing we need to prove is that anything derivable in P can be derived in P*. However, if P* involves both deletions from and additions to P, then both directions will require proof.)
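The same derivation can be sketched in Lean 4 (a sketch, with Classical.byContradiction standing in for ¬Elim, since Lean's classical axioms license exactly that move):

```lean
-- Deriving Q from ⊥ without using ⊥Elim (False.elim):
-- first ¬Intro, then ¬Elim, mirroring the Fitch proof in the text.
example (Q : Prop) (h : False) : Q :=
  have nnq : ¬¬Q := fun _ => h       -- ¬Intro: assuming ¬Q yields ⊥
  Classical.byContradiction nnq       -- ¬Elim: from ¬¬Q, conclude Q
```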
A little more generally and systematically: to show that P and P* are equivalent, we need to show two things.
(1) First, we need to show that P* can do anything P can do (i.e. that we have not lost any power as a result of our changes). So, for every rule Rdeleted that P has and P* does not have (i.e. for every rule we deleted from P in arriving at P*), we need to construct a proof, using only the rules of P*, that shows that we can get the same effect as Rdeleted.
(2) Second, we need to show that P can do anything P* can do (i.e. that we have not added any power as a result of our changes). So, for every rule Radded that P* has but P does not have (i.e. for every rule we added to P in arriving at P*), we need to construct a proof, using only the rules of P, that shows that we can get the same effect as Radded.
Some examples of equivalent formal systems. Using P to stand for the propositional portion of the formal system in Language, Proof, and Logic, the following are a few examples of equivalent systems. It would be a good exercise to prove these equivalences.
(1) P with P minus contradiction elimination.
(2) P with the system that results from replacing ∧Intro with a rule that lets you derive P ∧ Q from ¬(¬P ∨ ¬Q), and replacing ∧Elim with a rule that lets you derive ¬(¬P ∨ ¬Q) from P ∧ Q. (This amounts to the same thing as simply defining conjunction in terms of negation and disjunction. Another way to accomplish the same thing would be to delete the conjunction rules, and add as an axiom schema (P ∧ Q) ↔ ¬(¬P ∨ ¬Q).)
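A quick semantic sanity check (not a substitute for the proof-theoretic argument): P ∧ Q and its proposed replacement ¬(¬P ∨ ¬Q) have the same truth table, so at the level of tautological consequence the swap neither adds nor loses power.

```python
# Verify De Morgan's equivalence on all four rows of the truth table.
from itertools import product

def agree():
    return all(
        (p and q) == (not ((not p) or (not q)))
        for p, q in product([True, False], repeat=2)
    )

print(agree())  # → True
```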
(3) P with the system that results from replacing →Elim with a rule that lets you derive ¬P from P → Q and ¬Q.
(4) P with the system that results from replacing ∨Elim with a rule that lets you derive Q from P ∨ Q and ¬P (and P from P ∨ Q and ¬Q).
Notice that this one is a little more difficult to think about than the others so far. We need to prove that we can use P* to accomplish what ∨Elim accomplishes. But ∨Elim is unlike the other rules we have considered eliminating from P. The other rules tell us that if we have one or two sentences as premises or as lines derived from our premises, we can write down a further sentence. (∨Intro is a one-sentence case: if you have P, you can write down P ∨ Q. ∧Intro is a two-sentence case: if you have P and Q on separate lines, you can write down P ∧ Q.) So to prove that P* can accomplish what those rules accomplish, we simply construct a proof that has as premises the sentences the old rule required us to have as starting points, and as conclusion the sentence the old rule allowed us to derive.
∨Elim does not work like this, however. ∨Elim tells us that if we have P ∨ Q, a subproof from P to R, and a subproof from Q to R, then we can derive R. What exactly do we have to prove in P* to prove that it can accomplish the same thing?
Notice that a proof can contain a subproof from P to R if and only if the conditional P → R is derivable from the premises of the proof. (Left-to-right: if we have a subproof from P to R, then the →I rule lets us derive P → R. Right-to-left: if we have P → R, then we can construct a subproof from P to R as follows. Start the subproof with P as assumption. Then the next line of the subproof is R, by →E on P → R, which by assumption we have available, and P, which was the assumption of the subproof.) So in a sense being able to construct a subproof from P to R amounts to the same thing as being able to derive P → R from the premises of the proof.
So we can show that P* can do what ∨Elim does by constructing, in P*, a derivation of R from the three premises P ∨ Q, P → R, Q → R. (Assuming, that is, that P* has the rules →I and →E. I won't address the issue of how we'd prove the equivalence of a system that dropped →I.)
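A semantic companion to that derivation (a sketch, checking tautological validity rather than constructing the proof in P*): in every row of the truth table where P ∨ Q, P → R, and Q → R are all true, R is true as well.

```python
# Check that R is a tautological consequence of P ∨ Q, P → R, Q → R.
from itertools import product

def implies(a, b):
    return (not a) or b

def valid():
    return all(
        r
        for p, q, r in product([True, False], repeat=3)
        if (p or q) and implies(p, r) and implies(q, r)
    )

print(valid())  # → True: no row makes all three premises true and R false
```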
(5) P with the system that results from replacing ¬Elim with a rule allowing you to write any instance of the form P ∨ ¬P at any point in a proof (much as the =Intro rule allows you to write any instance of x = x at any point in a proof).
|2. Alternatives to classical propositional logic that are not equivalent to it|
A system of intuitionistic propositional logic results from deleting ¬Elim from P and leaving everything else as is. (However, ¬Intro must be interpreted as allowing you only to add a negation sign to a formula you've derived a contradiction from, so that in an application of ¬Intro you can't assume ¬P, derive a contradiction, and conclude P. Fitch lets you do this as a shortcut, and in classical logic that is fine, since P can always be derived from ¬¬P. However, in intuitionistic logic P can't be derived from ¬¬P, so in intuitionistic logic the shortcut is illegitimate!) Although this may seem like a small change from classical propositional logic, the resulting differences are in fact very significant.
P can't be derived from ¬¬P (obviously)
P ∨ ¬P can't be derived from no premises (i.e. is no longer a theorem)
¬P ∨ Q can't be derived from P → Q
P ∧ Q can't be derived from ¬(¬P ∨ ¬Q) (You can derive ¬¬P ∧ ¬¬Q, but that's all)
It is important to notice that if the semantics of our language stays the same, then intuitionistic logic must be incomplete (since all the arguments listed above, which can't be derived in intuitionistic logic, are tautologically valid). So the intuitionist must have some other way of understanding the semantics of the formal language. Saul Kripke has offered a very interesting semantics for which intuitionistic logic can be proven to be sound and complete.
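Kripke's idea can be sketched in miniature. Below is a toy two-world model in Python (the world names, accessibility relation, and atom assignment are illustrative choices, and the forcing clauses follow the standard Kripke semantics for intuitionistic negation): P becomes true only at the later world, so the earlier world forces neither P nor ¬P, and excluded middle fails there.

```python
# Minimal Kripke countermodel to P ∨ ¬P.
# Worlds w0 ≤ w1; the atom P is verified only at w1.
later = {"w0": ["w0", "w1"], "w1": ["w1"]}   # reflexive "later than" relation
atoms = {"w0": set(), "w1": {"P"}}           # P becomes true only at w1

def forces_atom(w, a):
    return a in atoms[w]

def forces_not(w, forces):
    # w forces ¬A iff no world later than w forces A
    return all(not forces(v) for v in later[w])

w = "w0"
p_here = forces_atom(w, "P")                               # w0 does not force P
not_p_here = forces_not(w, lambda v: forces_atom(v, "P"))  # nor ¬P, since w1 forces P
print(p_here or not_p_here)  # → False: excluded middle fails at w0
```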
What is the motivation for this alteration in classical logic? The intuitionists were primarily interested in reasoning about mathematics. And they regarded mathematics as in some sense constructed rather than as a collection of truths about a realm independent of human activity. (That is, they rejected platonism about mathematics.) If mathematics is a human construction, then it seems that to say that a mathematical claim is true just amounts to saying that it is provable -- there doesn't seem to be any room for the idea that there could be truths that were forever unprovable. If they weren't provable, what could make them true in the first place?
But if that's so, then to say that P is true amounts to saying that it is provable. Similarly, to say that ¬P is true is to say that it is provable that it is not provable that P. We can already see from this why an intuitionist might reject excluded middle, that is, might reject the idea that it is a theorem that P ∨ ¬P. After all, on the view we are considering, excluded middle amounts to the claim that for every mathematical proposition, we can either prove that it is true, or we can prove that we can't prove that it is true. But it seems entirely possible that there are mathematical sentences for which we can neither prove that they are true, nor prove that we can't prove that they are true.
Then ¬¬P will mean we can prove that it's not provable that it's not provable that P. But the intuitionist thinks the fact that we can't prove that we can't prove that P hardly suffices to show that P is true, i.e. that we can prove that P. It could easily be true both that we can't prove that P, and that we can't prove that we can't prove that P. If so, then negation elimination should be rejected as a rule of inference.
(A simpler way to put it: read P as "it is provable that P is true" and ¬P as "it is provable that P is false." Then P ∨ ¬P says that every sentence can either be proven true or proven false, and ¬¬P says that we can prove that P cannot be proven false. The intuitionist insists that the fact that we can't prove P false is not a good reason to think that it is in fact true!)
There are various reasons one might want more than two "truth" values.
(1) Truth-value gaps. (a) Sentences with false presuppositions. According to Strawson, sentences such as "The present king of France is bald" are neither true nor false; rather they have a presupposition that is false. One might want to treat these cases by having a third, "N" truth value for "neither true nor false". (b) Future contingents. Aristotle raised the possibility that sentences about the future are not currently either true or false.
(2) Truth-value gluts. To allow for the possibility that some sentences can be both true and false. This may seem like a strange thing to want to allow for, but it has been suggested as a response to certain paradoxes. Consider the sentence "This sentence is false" (a short version of the "liar" paradox). Suppose that it is true. Then, since it says it is false, it must be false. So we seem to have a proof that it is false. But if it is false, then what it says (namely that it is false) is true! So we also have a proof that it is true. One response to paradoxes like this is to add a third truth value, "B" let's say, for "both true and false."
(Note: the gaps/gluts terminology is borrowed from Graham Priest, An Introduction to Non-Classical Logic.)
(3) For epistemological reasons. Sometimes people want to add a third truth value for "unknown", so that the three truth values would be "known to be true," "known to be false," and "unknown." (However, in my view this is a terrible reason to abandon classical logic. It is better to keep standard two-valued logic, and treat our lack of information about which atomic sentences are actually true as ignorance of which row of the truth table corresponds to the facts.)
(4) To distinguish between different kinds of truth. (One suggestion is to distinguish between five truth values: necessarily true, contingently true, unknown, contingently false, necessarily false. Again, this seems to me like a bad idea. Instead of inventing a new truth value for "necessarily true," we can keep classical logic and identify necessary truths as those truths that are true in every row of the truth table.)
A good question for the final exam would be something like this: give one possible reason for using a many-valued logic. Then construct a truth table for P ∧ Q (or one of the other connectives), explaining your choices. One thing to keep in mind about many-valued truth tables is that, if you have m truth values and n atomic sentences, the truth table will need m^n rows.
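One standard way to fill in such a table is the strong Kleene convention, on which conjunction takes the minimum under the ordering F < N < T. A sketch (the value names and ordering are this one convention's choices; other many-valued tables are possible):

```python
# Three-valued (strong Kleene) truth table for P ∧ Q: 3^2 = 9 rows.
from itertools import product

order = {"F": 0, "N": 1, "T": 2}   # F < N < T
values = ["T", "N", "F"]

def kleene_and(p, q):
    return min(p, q, key=lambda v: order[v])

rows = list(product(values, repeat=2))
assert len(rows) == 3 ** 2         # m^n rows: 3 values, 2 atomic sentences
for p, q in rows:
    print(p, q, kleene_and(p, q))
```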
One way to think of fuzzy logic is as an extreme version of many-valued logic: we simply allow the "truth" value of a sentence to be any real number in the interval from 0 to 1, inclusive. So 0 is completely false, 1 is completely true, .8 is mostly true, etc.
Such values (call them "fuzzy values" rather than "truth values") are appropriate for many predicates that admit of degrees. For example, in Tarski's World, Small(x), Medium(x), and Large(x) are mutually exclusive and exhaustive categories. But in the real world things come in lots of sizes, and we might want to say that the fuzzy value of Small(karen) is .8 if Karen is 5'1", while the fuzzy value of Small(carol) is .6 if Carol is 5'3". (Well, I don't know whether these specific values are plausible, but you get the idea.)
Obviously we can't use truth tables to give the semantics of the logical connectives in fuzzy logic. There is a nondenumerable infinity of real numbers between 0 and 1, and we can't construct an infinitely long truth table! But we can instead give formulas for determining the fuzzy value of a compound sentence based on the fuzzy values of its components. For example, if F(P) is the fuzzy value of P, we might have:
F(P ∧ Q) = min(F(P), F(Q))
F(P ∨ Q) = max(F(P), F(Q))
F(¬P) = 1 - F(P)
F(P → Q) = 1 if F(P) ≤ F(Q); otherwise 1 - (F(P) - F(Q))
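These clauses transcribe directly into code (a sketch; the conditional clause is the Łukasiewicz-style one given in the text, and the function names are just illustrative):

```python
# Fuzzy-value clauses for the connectives, with F(·) as a float in [0, 1].
def f_and(p, q): return min(p, q)
def f_or(p, q): return max(p, q)
def f_not(p): return 1 - p
def f_imp(p, q): return 1.0 if p <= q else 1 - (p - q)

print(round(f_and(0.8, 0.6), 2))  # 0.6: a conjunction is as true as its weaker conjunct
print(round(f_imp(0.7, 0.6), 2))  # 0.9: the conditional loses only the gap between P and Q
```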
Now, what is a valid argument in fuzzy logic? There are various possible answers; a fairly standard one is that an argument is valid iff, on every interpretation, the fuzzy value of the conclusion is greater than or equal to the fuzzy value of the conjunction of the premises. (Equivalently, given the above semantics for conjunction, an argument is valid iff, on every interpretation, the fuzzy value of the conclusion is greater than or equal to the minimum of the fuzzy values of the premises.) (In fuzzy propositional logic, as in classical propositional logic, an interpretation is just an assignment of truth values to atomic sentences -- although we can no longer use truth tables to get an exhaustive listing of all such interpretations!) Notice that if we adopt this definition of validity, then →Elim is no longer an acceptable rule. (Example: Let F(P) = .7, F(Q) = .6. Then F(P → Q) = 1 - (.7 - .6) = 1 - .1 = .9. So the premises P and P → Q have fuzzy values of .7 and .9, respectively; so their conjunction has the fuzzy value .7. But Q has fuzzy value .6, which is less than .7, so the inference from P and P → Q to Q is not valid according to the above definition.)
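The counterexample to →Elim can be checked numerically (a self-contained sketch, with the conditional clause repeated from the formulas above):

```python
# With F(P) = .7 and F(Q) = .6, the premise set P, P → Q takes the
# minimum value .7, but the conclusion Q only reaches .6.
def f_imp(p, q):
    return 1.0 if p <= q else 1 - (p - q)

fp, fq = 0.7, 0.6
f_premises = min(fp, f_imp(fp, fq))   # min(.7, .9) = .7
print(f_premises <= fq)               # → False: the conclusion drops below the premises
```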