Philosophy of Logic and Language

Week 6: Proof-Theoretic Consequence

Jonny McIntosh

OVERVIEW

Last week, we looked at some problems with the MODEL-THEORETIC account of LOGICAL CONSEQUENCE.

This week, we will look at one of the main alternatives to the model-theoretic account: the PROOF-THEORETIC account.

After setting out the idea and its attractions, I'll look at the problem of LOGICAL RULES, in connection with which I'll discuss another solution to the problem of LOGICAL CONSTANTS.

I'll then look at a problem raised by Arthur Prior that is sometimes held to be devastating for proof-theoretic accounts: the problem of TONK.

PROOF-THEORETIC CONSEQUENCE

To a first approximation, the PROOF-THEORETIC account is the view that φ is a logical consequence of a set of premises Γ IFF there is a PROOF of φ from the members of Γ.

What is a proof? The rough idea is that φ can be derived from Γ by means of a series of applications of logical RULES.

Think of the introduction and elimination rules familiar to you from natural deduction:

  • ∧-Intro.

From φ and ψ, you can infer φ∧ψ.

  • ∧-Elim.

From φ∧ψ, you can infer either of φ and ψ.
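
If it helps to see these rules in action, here is a minimal sketch in Lean 4 (my choice of setting; any natural deduction system would serve), where the two rules appear as the pairing and projection of proofs:

  -- ∧-Intro.: from a proof of φ and a proof of ψ, infer φ ∧ ψ.
  example (φ ψ : Prop) (hφ : φ) (hψ : ψ) : φ ∧ ψ := And.intro hφ hψ

  -- ∧-Elim.: from a proof of φ ∧ ψ, infer either conjunct.
  example (φ ψ : Prop) (h : φ ∧ ψ) : φ := h.left
  example (φ ψ : Prop) (h : φ ∧ ψ) : ψ := h.right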

The approach has various attractions. First, insofar as the logical rules or axioms can be specified formally, it is FORMAL.

Second, inasmuch as the various rules or axioms are intuitively compelling, it promises an account of logical consequence that doesn't OVERGENERATE.

Third, it also seems well-placed to avoid analogues of the CONCEPTUAL INADEQUACY objection pressed by Etchemendy against the model-theoretic account.

For instance, insofar as the axioms are true, and the rules truth-preserving, in all possible worlds, it captures the idea that logically valid arguments are necessarily truth-preserving.

And inasmuch as we can know a priori that the axioms are true and the rules truth-preserving, it seems to capture the idea that we can know a priori that logically valid arguments are truth-preserving.

The proof-theoretic approach has various merits, then. And it was widely held in the early 20th century. But it fell out of favour in and around the 1930s.

The main reason for this was Gödel's first incompleteness theorem, which tells us that every consistent, effectively axiomatised formal system of sufficient strength is INCOMPLETE, i.e. that

Γ ⊨S φ ⇏ Γ ⊢S φ.

This suggests that, for each such system, there is a logically valid argument whose conclusion cannot be derived from its premises in that system.

This in turn suggests that the proof-theoretic account UNDERGENERATES: that there are logically valid arguments whose conclusions cannot be derived from their premises.

But this is too quick. There are various responses the proponent of the proof-theoretic approach can make.

First, they can deny that, for each consistent formal system of sufficient strength, there is a logically valid argument whose conclusion cannot be derived from its premises in that system.

This will be the response of anyone who thinks that the sorts of formal systems Gödel's theorem concerns — involving second-order quantification — are not logical.

But even if they accept that, for each consistent formal system of sufficient strength, there is a logically valid argument whose conclusion cannot be derived from its premises in that system ...

... it does not follow that there is a logically valid argument whose conclusion cannot be derived from its premises in any system whatsoever.

To think otherwise is to fall prey to a simple SCOPE fallacy — to infer ∃x∀y Rxy from ∀y∃x Rxy.
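
The valid direction of that quantifier shift can be checked mechanically; here is a minimal sketch in Lean 4 (the relation R is schematic, as in the text):

  -- ∃x∀y Rxy entails ∀y∃x Rxy: a single witness x works for every y.
  example {X Y : Type} (R : X → Y → Prop) :
      (∃ x, ∀ y, R x y) → ∀ y, ∃ x, R x y :=
    fun ⟨a, h⟩ y => ⟨a, h y⟩
  -- The converse is not derivable: a witness that works for one y need
  -- not work for every y. Here, read y as a system and x as a logically
  -- valid argument that the system fails to capture.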

Second, then, proponents of the proof-theoretic approach can say that a conclusion φ is a logical consequence of a set of premises Γ IFF there is a proof of φ from the members of Γ in some system (of a certain sort) or other.

But this too faces its problems. The first is the problem of demarcating the LOGICAL RULES. The second is a problem raised by Arthur Prior: the problem of TONK.

LOGICAL RULES

How, if at all, does the proof-theoretic account draw a distinction between our old pals, ARGUMENT 1 and ARGUMENT 4?

ARGUMENT 1

  1. Someone smokes and drinks
  2. So, someone smokes and someone drinks

ARGUMENT 4

  1. John is a bachelor
  2. So, John is not married

The conclusion of ARGUMENT 1 can be derived from the premise by applications of ∃-Elim., ∧-Elim., ∧-Intro., and ∃-Intro.
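
Here is that derivation written out in Lean 4, with the rule applications marked in comments (the predicate names 'Smokes' and 'Drinks' are my labels):

  example {Person : Type} (Smokes Drinks : Person → Prop)
      (prem : ∃ x, Smokes x ∧ Drinks x) :
      (∃ x, Smokes x) ∧ (∃ x, Drinks x) :=
    match prem with
    | ⟨a, hsd⟩ =>                              -- ∃-Elim.
      have hs : Smokes a := And.left hsd       -- ∧-Elim.
      have hd : Drinks a := And.right hsd      -- ∧-Elim.
      And.intro (Exists.intro a hs) (Exists.intro a hd)  -- ∃-Intro. twice, then ∧-Intro.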

But the conclusion of ARGUMENT 4 can be derived from the premise by a single application of the following rule: from α is a bachelor, you can infer α is not married.

This latter rule is just as intuitively compelling as the others. So on what grounds, if any, can we distinguish between the two arguments?

The obvious solution is to say that the rules employed in deriving the conclusion of ARGUMENT 1 from its premise are rules governing the use of logical expressions ...

... while the rule employed in deriving the conclusion of ARGUMENT 4 from its premise is not a rule governing the use of a logical expression.

But now we are confronted with the problem of LOGICAL CONSTANTS: on what grounds, if any, are we to distinguish logical expressions (or constants) from the nonlogical ones?

Last week, we looked at the attempt to solve the problem of logical constants in terms of PERMUTATION INVARIANCE.

A different approach, one that fits well with the proof-theoretic picture, appeals to the notion of PURELY INFERENTIAL rules.

To see the idea, consider the introduction and elimination rules for '∧' again:

  • ∧-Intro.

From φ and ψ, you can infer φ ∧ ψ.

  • ∧-Elim.

From φ ∧ ψ, you can infer either of φ and ψ.

Plausibly, these rules characterise the meaning of '∧': in order to understand '∧', it is enough to know that it is governed by these rules.

Moreover, the rules are purely inferential, at least in the sense that they govern INFERENTIAL TRANSITIONS between thoughts (or sentences that express them).

Contrast them with the following introduction rule for the sentence 'It is raining': if it is raining, one may infer 'It is raining'.

This rule is not purely inferential. It does not govern inferential transitions between thoughts (or sentences that express them): it takes us to the thought that it is raining not from another thought, but from the weather itself.

This suggests that we can characterise the logical expressions as those whose meaning can be characterised in terms of purely inferential rules.

It is not clear, however, how this is supposed to rule out expressions such as 'is a bachelor'. Consider the following rules:

  • bachelor-Intro.

From α is an unmarried man, you can infer α is a bachelor.

  • bachelor-Elim.

From α is a bachelor, you can infer α is an unmarried man.

It is plausible that, in order to understand 'is a bachelor', it is enough to know that it is governed by these rules.

Moreover, if governing inferential transitions is enough to make a rule purely inferential, these are as purely inferential as ∧-Intro. and ∧-Elim.
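
To see how close the parallel is, here is the bachelor pair rendered in Lean 4, with 'is a bachelor' defined as 'is an unmarried man' (all the predicate names here are mine). Formally, nothing distinguishes these rules from ∧-Intro. and ∧-Elim.:

  def Bachelor {Person : Type} (Man Married : Person → Prop) (a : Person) : Prop :=
    Man a ∧ ¬ Married a

  -- bachelor-Intro.: from 'α is an unmarried man', infer 'α is a bachelor'.
  example {Person : Type} (Man Married : Person → Prop) (a : Person)
      (h : Man a ∧ ¬ Married a) : Bachelor Man Married a := h

  -- bachelor-Elim.: the reverse direction.
  example {Person : Type} (Man Married : Person → Prop) (a : Person)
      (h : Bachelor Man Married a) : Man a ∧ ¬ Married a := h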

One option here is to insist that it is merely a necessary condition on a rule's being purely inferential that it govern inferential transitions.

In addition, perhaps, we might insist that a rule can only be purely inferential if every sign that appears in the formulation of the rule, apart from the one being characterised, is STRUCTURAL or SCHEMATIC.

By this test, ∧-Intro. and ∧-Elim. are purely inferential: apart from '∧', only the schematic letters φ and ψ appear in them. But bachelor-Intro. and bachelor-Elim. are not: 'unmarried' and 'man' are neither structural nor schematic.

Whereas the permutation invariance account draws on the idea that logic is TOPIC-NEUTRAL, and insensitive to the particular identities of objects ...

... this draws on the idea that logic is NORMATIVE for thinking as such, specifying rules for correct use that can be grasped by anyone who knows what it is to think or reason.

But even if something like this deals with ARGUMENT 4, it cannot be the whole story...

PROBLEM OF TONK

The problem with ARGUMENT 4 is that, intuitively, it is not logically valid, and it is not obvious how it can be classified as such on the proof-theoretic account.

But ARGUMENT 4 is, at least, truth-preserving. And perhaps it's not entirely out of the question that it is logically valid after all.

Arthur Prior famously raised a problem for which no such move is available for proponents of the proof-theoretic account — the problem of TONK.

To illustrate the problem, he asks us to consider the connective 'tonk', characterised in terms of the following introduction and elimination rules:

  • tonk-Intro.

From φ, you can infer φ tonk ψ.

  • tonk-Elim.

From φ tonk ψ, you can infer ψ.

By means of tonk, it seems we can show that the proof-theoretic account classifies as logically valid arguments which aren't even truth-preserving!

ARGUMENT 5

  1. Theresa May is the Prime Minister tonk 1 + 1 = 3
  2. So, 1 + 1 = 3

According to the proof-theoretic account, the conclusion of ARGUMENT 5 is a logical consequence of its premise IFF there is a proof of that conclusion from that premise in some system or other.

And the problem is that there is a proof of that conclusion from that premise in some system or other: namely, in systems that contain tonk-Intro. and tonk-Elim.!
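
We can even make the disaster machine-checkable. In a Lean 4 sketch, postulating Prior's two rules as axioms (the names 'Tonk', 'tonk_intro', and 'tonk_elim' are mine) lets us derive anything from anything:

  axiom Tonk : Prop → Prop → Prop

  axiom tonk_intro {φ ψ : Prop} : φ → Tonk φ ψ   -- tonk-Intro.
  axiom tonk_elim  {φ ψ : Prop} : Tonk φ ψ → ψ   -- tonk-Elim.

  -- ARGUMENT 5 in miniature: any ψ whatsoever follows from any φ.
  example (φ ψ : Prop) (hφ : φ) : ψ := tonk_elim (tonk_intro hφ)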

(Notice, by the way, that tonk-Intro. and tonk-Elim. both appear to be as purely inferential as ∧-Intro. and ∧-Elim.)

(So even if an account of logical constants in terms of the notion of a purely inferential rule is workable, it doesn't seem to help here.)

One solution is to say that the conclusion of an argument is a logical consequence of a set of premises IFF there is a proof of that conclusion from those premises in some SOUND system or other.

But to say that a system is sound is just to say that a conclusion can be derived from a set of premises in that system only if that conclusion is a logical consequence of those premises.

This leads Etchemendy, among others, to despair: a proof-theoretic account of logical consequence, it seems, either massively overgenerates or is hopelessly circular.

This is overly pessimistic. What the problem shows is that if logical consequence is to be identified with derivability in some system or other, we need some criterion of admissible systems.

But soundness is not the only criterion available to us. Here are two alternative criteria:

First, following Nuel Belnap, we can identify logical consequence with derivability in some or other conservative extension of our usual systems, where ...

... the addition of a connective to a system is CONSERVATIVE IFF every formula that can be proved in the new system, and that doesn't contain the connective, can also be proved in the old system.

Tonk fails this test: adding tonk-Intro. and tonk-Elim. lets us prove every formula whatsoever, including tonk-free formulas, such as '1 + 1 = 3', that the old system cannot prove.

Second, following Michael Dummett, we can identify logical consequence with derivability in some or other system in which the introduction and elimination rules for each constant are in harmony, where ...

... the introduction and elimination rules for a connective are in HARMONY IFF (roughly) the elimination rules do not allow us to derive anything more or less than is required for its introduction.

Think of ∧-Intro. and ∧-Elim. again. What we can infer by eliminating '∧' is exactly what we need in order to introduce it.
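
In proof-term terms, harmony for '∧' shows up as the fact that an ∧-Intro. immediately followed by an ∧-Elim. is a detour that computes away. A Lean 4 sketch:

  -- Eliminating an introduced conjunction returns exactly the proof
  -- that was supplied to introduce it.
  example (φ ψ : Prop) (hφ : φ) (hψ : ψ) : φ := (And.intro hφ hψ).left

  -- At the level of data, where pairs play the role of ∧-proofs, the
  -- intro-then-elim detour literally reduces to the component put in.
  example : (Prod.mk 1 true).fst = 1 := rfl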

Dag Prawitz offers a different response to the problem of tonk. It's similar to Dummett's harmony-based approach ...

... but avoids identifying logical consequence with derivability in a given system altogether. I'll just give a sketch of the basic ideas.

Prawitz offers a useful analogy. Consider the following expressions: '1', '8', '1456-345', 'the largest even number less than 10', 'the largest even number'.

Checking that certain of these expressions denote a natural number is trivial: they are in CANONICAL form — here, decimal notation.

For the other expressions, however, whether or not they denote a natural number depends on whether they can be TRANSFORMED into canonical form.

In some cases, they can. '1456-345' can be transformed into '1111', and 'the largest even number less than 10' can be transformed into '8'.

In one case, however, they can't: 'the largest even number'. In this case, the expression does not denote a natural number.
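
The successful transformations here are mechanical, and can themselves be machine-checked; a quick Lean 4 sketch:

  -- '1456 - 345' is not in canonical (numeral) form, but it can be
  -- transformed into the numeral '1111'.
  example : 1456 - 345 = 1111 := by decide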

Similarly, according to Prawitz, certain arguments are TRIVIALLY logically valid: namely, those that employ only introduction rules.

The reason these are trivially logically valid, according to Prawitz, is that introduction rules are SELF-JUSTIFYING.

In other words, it is part of the meaning of a logical expression that its introduction rule is logically valid.

Other arguments — those that employ elimination rules — are logically valid only if they can be TRANSFORMED into arguments that are trivially logically valid.

What about tonk? If introduction rules are self-justifying, doesn't it follow that arguments that only employ tonk-Intro. are logically valid?

Yes! According to Prawitz, what the case of tonk shows is that we cannot stipulate just any old elimination rules.

In order for an elimination rule to be justified, we have to be able to transform any argument that uses it into one that only uses introduction rules.

And the problem with tonk is that there is no way of transforming arguments that employ its elimination rule into such trivially logically valid arguments: a canonical proof of φ tonk ψ contains only a proof of φ, and so provides nothing from which a proof of ψ could be extracted.

SUMMARY

This week, we've seen what the PROOF-THEORETIC account is, its merits, and two problems: the problem of LOGICAL RULES and the problem of TONK.

I hope to say a little more next week about the story proponents of the proof-theoretic account can tell about the EPISTEMIC guarantee logically valid arguments provide.

But the main topic will be LOGICAL PLURALISM, the idea that there is more than one correct logic.