Next: Models, Truth, Logical Implication, Up: Structures: Still More Notes Previous: Defining Structures


Things start to get a little ugly at this point, or at least a little complicated. But the notion of truth in a structure is extremely important. After all, we are ultimately interested in notions like validity and logical consequence, both of which have to do with truth. (For instance, a sentence $C$ is a logical consequence of premises $P_1, P_2, \ldots, P_n$ just in case $C$ is true in every structure in which $P_1, P_2, \ldots, P_n$ are all true.)

The basic idea of truth in a structure is clear enough. A structure is an interpretation or model of a language: it determines what individuals the constants refer to, and what functions and relations the function symbols and relation symbols express. (Keep in mind that functions and relations are just sets of n-tuples of individuals in the universe of the structure. So knowing what function or relation is expressed amounts to knowing what the value of the function is for every argument, and knowing which objects the relation relates. So, for instance, if we specify the relation that a two-place relation symbol Larger expresses, then this tells us, not that ``Larger'' has to do with size, but which objects in our universe are larger than which other objects.) Once we know what we are talking about, we should be able to determine whether what we are saying is true.
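To make the point about relations concrete, here is a minimal sketch in Python. The representation, and the names `universe`, `Larger`, and `c_interp`, are my own illustrative choices, not anything from Leary or Tarski's World; the point is only that a relation in a structure really is just a set of tuples.

```python
# A toy structure for a language with one constant symbol "c" and one
# binary relation symbol "Larger". All names here are illustrative.

universe = {"a", "b", "d"}          # the universe A of the structure

# The interpretation of "Larger" is just a set of ordered pairs: it tells
# us WHICH objects are larger than which, and nothing about size itself.
Larger = {("a", "b"), ("a", "d"), ("d", "b")}

# The interpretation of the constant symbol "c" is an element of A.
c_interp = "a"

# To "know what relation is expressed" is just to be able to answer, for
# any pair of objects, whether that pair is in the relation:
print(("a", "b") in Larger)   # True:  a is larger than b
print(("b", "a") in Larger)   # False: b is not larger than a
```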

However, making the idea of truth in a structure clear and precise will require a blizzard of definitions. One thing that makes the matter more complex is that we will define truth not only for sentences, but also for formulas. Well, a formula is not true or false simpliciter. Consider (in the language of Tarski's World) the formula Larger(x,y). Both variables in this formula are free, not bound by a quantifier. So it doesn't make any sense to talk about the formula as being true or false: the variables do not refer to any object in particular (that's why they are called ``variables''), and are not bound by quantifiers, so the formula doesn't say anything determinate (it does not ``express a complete thought,'' as sentences are supposed to do). So how can we define truth in a structure for formulas? Only by relativizing it to an assignment of objects to variables. Suppose we have two objects: $a$, which is large, and $b$, which is small. Then we can say that `Larger($x, y$)' is true relative to the assignment $\{\langle x,a\rangle , \langle y,b\rangle \}$, but false relative to the assignments $\{\langle x,b\rangle , \langle y,a\rangle \}$, $\{\langle x, a\rangle , \langle y, a \rangle \}$, and $\{\langle x, b\rangle , \langle y, b\rangle \}$. (Usually we will express this by saying that the assignment just specified satisfies the formula.) This is the idea that the first few definitions will make precise.

So, here goes (Leary's Definition 1.7.1, p. 32):

Definition: If $\mathfrak{A}$ is an $\mathcal{L}$-structure, a variable assignment function into $\mathfrak{A}$ is a function $s$ that assigns to each variable an element of the universe $A$. So a variable assignment function into $\mathfrak{A}$ is any function with domain $Vars$ and codomain $A$.

Leary notes that variable assignment functions need not be injective or bijective. These terms are also used in problem 5 on p. 31. This is another case in which Leary assumes background that we may or may not have. A function is injective if it does not give the same value for more than one argument. Recall that no function gives more than one value for a given argument. Putting these two things together, we see that an injective function must be one-to-one: for each argument, there is only one value, and for each value, there is only one argument. A function is said to be surjective if it is ``onto,'' that is, if every element of the codomain of the function is a value of the function for some argument. A function is said to be bijective if it is both injective and surjective, that is, both one-to-one and onto. So the point here is that a variable assignment function need not determine a different value for each argument, and also need not take every element of $A$ as a value for some argument.
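These properties are easy to check mechanically for a function with a finite domain. Here is a hypothetical Python sketch (the helper names `is_injective`, `is_surjective`, and `is_bijective` are mine), representing a function as a dict from arguments to values:

```python
def is_injective(f):
    """No two arguments share a value (one-to-one)."""
    return len(set(f.values())) == len(f)

def is_surjective(f, codomain):
    """Every element of the codomain is a value for some argument (onto)."""
    return set(f.values()) == set(codomain)

def is_bijective(f, codomain):
    """Both one-to-one and onto."""
    return is_injective(f) and is_surjective(f, codomain)

# A variable assignment need not be injective: both variables may get "a".
s = {"x": "a", "y": "a"}
A = {"a", "b"}
print(is_injective(s))       # False: x and y share the value "a"
print(is_surjective(s, A))   # False: "b" is never assigned to any variable
```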

So far we're just assigning an object to each distinct variable. We aren't yet considering how we will use this assignment to discuss truth of a formula relative to an assignment. Next we introduce a notation for taking a variable assignment function $s$ and changing it for a single variable. (This is Leary's Definition 1.7.2, p. 33.)

Definition: If $s$ is a variable assignment function into $\mathfrak{A}$ and $x$ is a variable and $a \in A$, then $s[x\vert a]$, called an $x$-modification of the assignment function $s$, is the variable assignment function into $\mathfrak{A}$ defined as follows:

\begin{displaymath}
s[x\vert a](v) = \left\{
\begin{array}{ll}
s(v) & \textrm{if } v \not= x \\
a & \textrm{if } v = x
\end{array}
\right.
\end{displaymath}

Consider the following assignment function $s$ (mentioned earlier): $\{\langle x,a\rangle , \langle y,b\rangle \}$. Now suppose we want to modify this function so that both variables are assigned $b$. We can write this modified variable assignment function like this: $s[x\vert b]$. Then $s(x) = a$, but $s[x\vert b](x) = b$. On the other hand, $s(y) = b$ and $s[x\vert b](y) = b$. Since the modification affects only the variable $x$, applying the modified variable assignment function to $y$ gives us the same results as applying the original variable assignment function to $y$.
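The $x$-modification can be pictured as a dictionary update that leaves the original assignment untouched. A sketch in Python (the helper name `modify` is my own):

```python
def modify(s, x, a):
    """Return the assignment that agrees with s except that x gets a."""
    t = dict(s)    # copy, so the original assignment s is left intact
    t[x] = a
    return t

s = {"x": "a", "y": "b"}
t = modify(s, "x", "b")    # t plays the role of s[x|b]

print(s["x"], t["x"])      # a b  -- only x is affected by the modification
print(s["y"], t["y"])      # b b  -- y gets the same value from both
```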

We are gradually closing in on a definition of truth of a formula relative to an assignment. If we have a formula with some free variables, then assigning an element from the domain to each free variable will suffice to determine the interpretation of all the terms. There are three kinds of terms: constants, variables, and function symbols followed by terms. Constants already have an interpretation assigned by the structure itself, so they are taken care of. The variables are taken care of by the variable assignment function $s$. This leaves only the complex terms constructed out of function symbols and other terms. But once the interpretation of constants and variables is taken care of, the complex terms are taken care of also. Each term $t_i$ that follows a function symbol is a constant, variable, or complex term. If it is a variable or a constant, it is taken care of already. If it is a complex term, we look at its constituent terms. Recursively following this procedure, we eventually reach function symbols all of whose terms have interpretations. Then we read the interpretation of the function symbol followed by those terms off the structure. Then we return to the next higher level of our recursive process and proceed to examine the next term. Eventually we will have an interpretation for every term in our formula. [Note to self: clarify the description of this recursive procedure! Leary notes that there is actually a theorem, the aptly named ``Recursion Theorem,'' that establishes that an assignment of referents to variables suffices to determine the interpretation of every term, and hints that it's a little tricky. But it should be possible to describe the recursive process in a clear and detailed enough way that it is intuitively clear that we are guaranteed to be able to find an interpretation for every term.]

So we can take a variable assignment function, together with facts determined by the structure itself, and construct a more general term assignment function, as follows (Leary's Definition 1.7.3).

Definition: Suppose that \ensuremath{\mathfrak A} is an \ensuremath{\mathcal L}-structure and $s$ is a variable assignment function into \ensuremath{\mathfrak A}. The function $\bar{s}$, called the term assignment function generated by $s$, is the function with domain consisting of the set of \ensuremath{\mathcal L}-terms and codomain $A$ defined recursively as follows:

  1. If $t$ is a variable, $\bar{s}(t) = s(t)$.
  2. If $t$ is a constant symbol $c$, then $\bar{s}(t) = c^\ensuremath{\mathfrak A}$.
  3. If $t$ is $f t_1 t_2 \ldots t_n$, then $\bar{s}(t) = f^\ensuremath{\mathfrak A}(\bar{s}(t_1),\bar{s}(t_2), \ldots, \bar{s}(t_n))$.
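The recursion in the three clauses above can be sketched in Python. The encoding of terms as nested tuples, and the names `sbar`, `const_interp`, and `func_interp`, are my own assumptions for illustration, not Leary's: a variable is a string, a constant is `("const", name)`, and a complex term is `(function_symbol, t1, ..., tn)`.

```python
const_interp = {"zero": 0}                    # c  ->  c^A
func_interp = {"plus": lambda m, n: m + n}    # f  ->  f^A

def sbar(s, t):
    """Evaluate term t under variable assignment s (mirrors Def. 1.7.3)."""
    if isinstance(t, str):                    # clause 1: t is a variable
        return s[t]
    if t[0] == "const":                       # clause 2: t is a constant symbol
        return const_interp[t[1]]
    f, *args = t                              # clause 3: t is f t1 ... tn
    return func_interp[f](*(sbar(s, u) for u in args))

s = {"x": 3, "y": 4}
# The term "plus x (plus y zero)" is evaluated recursively from the inside out:
print(sbar(s, ("plus", "x", ("plus", "y", ("const", "zero")))))   # 7
```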

Now we can (finally) define what it is for a formula to be true relative to a variable assignment function. The usual term for truth relative to an assignment is satisfaction: we say that an assignment $s$ satisfies a formula, or, in the language of the next definition, that a structure satisfies a formula with a particular assignment.

Definition: Suppose that \ensuremath{\mathfrak A} is an \ensuremath{\mathcal L}-structure, $\phi$ is an \ensuremath{\mathcal L}-formula, and $s$ : $Vars \to A$ is an assignment function. Then we say that \ensuremath{\mathfrak A} satisfies $\phi$ with assignment $s$, and write $\ensuremath{\mathfrak A}\models \phi[s]$, in the following circumstances:

  1. If $\phi$ is $=t_1 t_2$ and $\bar{s}(t_1)$ is the same element of the universe $A$ as $\bar{s}(t_2)$, or
  2. If $\phi$ is $R t_1 t_2 \ldots t_n$ and $\langle \bar{s}(t_1), \bar{s}(t_2), \ldots, \bar{s}(t_n) \rangle \in R^\ensuremath{\mathfrak A}$, or
  3. If $\phi$ is $(\neg \alpha)$ and $\ensuremath{\mathfrak A}\not\models \alpha[s]$ (where $\not\models$ means ``does not satisfy''), or
  4. If $\phi$ is $(\alpha \vee \beta )$ and $\ensuremath{\mathfrak A}\models \alpha [s]$, or $\ensuremath{\mathfrak A}\models \beta[s]$ (or both), or
  5. If $\phi$ is $(\forall x)(\alpha)$ and, for each element $a$ of $A$, $\ensuremath{\mathfrak A}\models \alpha[s(x\vert a)]$.

If $\Gamma$ is a set of \ensuremath{\mathcal L}-formulas, we say that \ensuremath{\mathfrak A} satisfies $\Gamma$ with assignment $s$, and write $\ensuremath{\mathfrak A}\models \Gamma[s]$, if for each $\gamma \in \Gamma, \ensuremath{\mathfrak A}\models \gamma[s]$.
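The five clauses translate directly into a recursive evaluator. Here is a Python sketch; the encoding of formulas as nested tuples, and the names `sat`, `A`, and `rels`, are my own choices for illustration, and terms are restricted to bare variables so that $\bar{s}$ coincides with $s$.

```python
# Formulas: ("=", t1, t2), ("R", name, t1, ..., tn), ("not", a),
#           ("or", a, b), ("forall", x, a); terms are variable names.

A = {"a", "b"}                                   # the universe
rels = {"Larger": {("a", "b")}}                  # R  ->  R^A

def sat(s, phi):
    """Does the structure satisfy phi with assignment s?"""
    op = phi[0]
    if op == "=":                                # clause 1: same element of A
        return s[phi[1]] == s[phi[2]]
    if op == "R":                                # clause 2: tuple in R^A
        return tuple(s[t] for t in phi[2:]) in rels[phi[1]]
    if op == "not":                              # clause 3
        return not sat(s, phi[1])
    if op == "or":                               # clause 4
        return sat(s, phi[1]) or sat(s, phi[2])
    if op == "forall":                           # clause 5: every x-modification
        return all(sat({**s, phi[1]: a}, phi[2]) for a in A)
    raise ValueError(f"unknown connective {op}")

s = {"x": "a", "y": "b"}
print(sat(s, ("R", "Larger", "x", "y")))                   # True
print(sat(s, ("forall", "x", ("R", "Larger", "x", "y"))))  # False: b is not larger than b
```

Note how clause 5 is handled: `{**s, phi[1]: a}` builds the $x$-modification $s[x\vert a]$, and `all(...)` runs through every element of the universe.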

This definition is where we attach meanings to the logical symbols in \ensuremath{\mathcal L}. The structure gives the meaning of the constant symbols, function symbols, and relation symbols, while the definition of satisfaction above essentially gives the meaning of the symbols that are common to all first-order languages. These don't need to be given in the structure for the same reason that the logical symbols don't need to be listed when we specify a language: since they are common to all first-order languages, we can leave them out of the structures, which merely give the semantics of the symbols that can change from language to language.

Notice that the notion of an $x$-modification of a variable assignment function comes in handy in the final clause above. If we have a quantified formula, we don't care what entity $s$ assigns to occurrences of the variable that are bound by its quantifier. For example, if we have the formula

\begin{displaymath}(\forall x)\textrm{Larger}(x,y)\end{displaymath}

and an assignment function

\begin{displaymath}s = \{\langle x, a \rangle, \langle y, b \rangle\}\end{displaymath}

and we want to know whether the assignment function satisfies the formula, then we need to appeal to the function $s$ to find how to interpret $y$, which is free, but the fact that $s$ assigns $a$ to $x$ is irrelevant, because the quantifier tells us that the whole formula is true just in case the subformula after the quantifier is true for every assignment of an object to $x$. The notation $s[x\vert a]$ gives us a convenient way to consider assignments that keep all of the assignments of $s$ except its assignment to $x$. (In clause 5 above, `$s[x\vert a]$' is written `$s(x\vert a)$', i.e. with parentheses instead of square brackets, for readability: `$\alpha[s(x\vert a)]$' is easier to read than `$\alpha[s[x\vert a]]$'!)

cbrown 2002-01-30