We have considered logic both as its own sub-discipline of mathematics, and as a means to help us better understand and write proofs. In either view, we noticed that mathematical statements have a particular logical form, and analyzing that form can help make sense of the statement.

At the most basic level, a statement might combine simpler statements using *logical connectives*. We often make use of variables, and *quantify* over those variables. How to resolve the truth or falsity of a statement based on these connectives and quantifiers is what logic is all about. From this, we can decide whether two statements are logically equivalent or if one or more statements (logically) imply another.

When writing proofs (in any area of mathematics) our goal is to explain why a mathematical statement is true. Thus it is vital that our argument implies the truth of the statement. To be sure of this, we first must know what it means for the statement to be true, as well as ensure that the statements that make up the proof correctly imply the conclusion. A firm understanding of logic is required to check whether a proof is correct.

There is, however, another reason that understanding logic can be helpful. Understanding the logical structure of a statement often gives clues as to how to write a proof of the statement.

This is not to say that writing proofs is always straightforward. Consider again the *Goldbach conjecture*:

Every even number greater than 2 can be written as the sum of two primes.

We are not going to try to prove the statement here, but we can at least say what a proof might look like, based on the logical form of the statement. Perhaps we should write the statement in an equivalent way which better highlights the quantifiers and connectives:

For all integers \(n\), if \(n\) is even and greater than 2, then there exist integers \(p\) and \(q\) such that \(p\) and \(q\) are prime and \(n = p+q\).

What would a direct proof look like? Since the statement starts with a universal quantifier, we would start by, “Let \(n\) be an arbitrary integer.” The rest of the statement is an implication. In a direct proof we assume the “if” part, so the next line would be, “Assume \(n\) is greater than 2 and is even.” I have no idea what comes next, but eventually, we would need to find two prime numbers \(p\) and \(q\) (depending on \(n\)) and explain how we know that \(n = p+q\).

Or maybe we try a proof by contradiction. To do this, we first assume the negation of the statement we want to prove. What is the negation? From what we have studied we should be able to see that it is,

There is an integer \(n\) such that \(n\) is even and greater than \(2\), but for all integers \(p\) and \(q\), either \(p\) or \(q\) is not prime or \(n \ne p+q\).

Could this statement be true? A proof by contradiction would start by assuming it was and eventually conclude with a contradiction, proving that our assumption of truth was incorrect. And if you can find such a contradiction, you will have proved the most famous open problem in mathematics. Good luck.
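Although a proof is out of reach, the conjecture is easy to test for small cases. Here is a minimal Python sketch (the helper names `is_prime` and `goldbach_pair` are my own) that searches for a witness pair of primes for each even number up to 100; checking instances like this is, of course, not a proof of the universal statement.

```python
# Brute-force check of Goldbach's conjecture for small even numbers.

def is_prime(n):
    """Trial-division primality test (adequate for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n):
    """Return primes (p, q) with p + q == n, or None if no pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number from 4 to 100 should have a witness pair.
for n in range(4, 101, 2):
    assert goldbach_pair(n) is not None, f"counterexample at {n}!"

print(goldbach_pair(28))  # (5, 23)
```

Finding a witness for each small even number is exactly the existential part of the statement; the universal part is what nobody has been able to prove.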

## Chapter Review

## 1

Complete a truth table for the statement \(\neg P \rightarrow (Q \wedge R)\).

- Solution

  | \(P\) | \(Q\) | \(R\) | \(\neg P \rightarrow (Q \wedge R)\) |
  |---|---|---|---|
  | T | T | T | T |
  | T | T | F | T |
  | T | F | T | T |
  | T | F | F | T |
  | F | T | T | T |
  | F | T | F | F |
  | F | F | T | F |
  | F | F | F | F |
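The same table can be generated mechanically. A short Python sketch (the function name `stmt` is my own) that enumerates all eight rows of \(\neg P \rightarrow (Q \wedge R)\):

```python
from itertools import product

# Enumerate the truth table for ¬P → (Q ∧ R).
# On booleans, (a <= b) computes the material conditional a → b.

def stmt(P, Q, R):
    return (not P) <= (Q and R)

rows = [(P, Q, R, stmt(P, Q, R)) for P, Q, R in product([True, False], repeat=3)]

for P, Q, R, v in rows:
    print(*("T" if x else "F" for x in (P, Q, R, v)))
```

The printed rows match the table above: the statement is false exactly on the rows where \(P\) is false and \(Q \wedge R\) is false.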

## 4. Proofs

Given that we can test an argument for validity, it might seem that we have a fully developed system to study arguments. However, there is a significant practical difficulty with our semantic method of checking arguments using truth tables (you may have already noted what this practical difficulty is, when you did problems 1e and 2e of chapter 3). Consider the following argument:

Alison will go to the party.

If Alison will go to the party, then Beatrice will.

If Beatrice will go to the party, then Cathy will.

If Cathy will go to the party, then Diane will.

If Diane will go to the party, then Elizabeth will.

If Elizabeth will go to the party, then Fran will.

If Fran will go to the party, then Giada will.

If Giada will go to the party, then Hilary will.

If Hilary will go to the party, then Io will.

If Io will go to the party, then Julie will.

Julie will go to the party.

Most of us will agree that this argument is valid. It has a rather simple form, in which one sentence is related to the previous sentence, so that we can see the conclusion follows from the premises. Without bothering to make a translation key, we can see the argument has the following form: A; A → B; B → C; C → D; D → E; E → F; F → G; G → H; H → I; I → J; therefore J.

However, if we are going to check this argument, then the truth table will require 1024 rows! This follows directly from our observation that for arguments or sentences composed of n atomic sentences, the truth table will require 2^n rows. This argument contains 10 atomic sentences. A truth table checking its validity must have 2^10 rows, and 2^10 = 1024. Furthermore, it would be trivial to extend the argument for another, say, ten steps, but then the truth table that we make would require more than a million rows!
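The brute-force check is easy to automate, even at 1024 rows. Here is a Python sketch (the letters A through J are my own labels for the ten atomic sentences, Alison through Julie) that tests validity semantically: the argument is valid just in case no valuation makes every premise true and the conclusion false.

```python
from itertools import product

def implies(a, b):
    """Material conditional."""
    return (not a) or b

count = 0
valid = True
for A, B, C, D, E, F, G, H, I, J in product([False, True], repeat=10):
    count += 1
    premises = [A,
                implies(A, B), implies(B, C), implies(C, D),
                implies(D, E), implies(E, F), implies(F, G),
                implies(G, H), implies(H, I), implies(I, J)]
    if all(premises) and not J:
        valid = False  # a counterexample row would refute validity

print(count)  # 1024
print(valid)  # True
```

The loop is exactly the truth-table method, just performed by the machine; the practical difficulty the text describes is the exponential growth of `count`, not any conceptual obstacle.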

For this reason, and for several others (which become evident later, when we consider more advanced logic), it is very valuable to develop a syntactic proof method. That is, a way to check proofs not using a truth table, but rather using rules of syntax.

Here is the idea that we will pursue. A valid argument is an argument such that, necessarily, if the premises are true, then the conclusion is true. We will start just with our premises. We will set aside the conclusion, only to remember it as a goal. Then, we will aim to find a reliable way to introduce another sentence into the argument, with the special property that, if the premises are true, then this single additional sentence to the argument must also be true. If we could find a method to do that, and if after repeated applications of this method we were able to write down our conclusion, then we would know that, necessarily, if our premises are true then the conclusion is true.

The idea is clearer when we demonstrate it. The methods for introducing new sentences will be called “inference rules”. We introduce our first inference rules for the conditional. Remember the truth table for the conditional:

| Φ | Ψ | (Φ→Ψ) |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | T |
| F | F | T |

Look at this for a moment. If we have a conditional like (P→Q) (looking at the truth table above, remember that this would mean that we let Φ be P and Ψ be Q), do we know whether any other sentence is true? From (P→Q) alone we do not. Even if (P→Q) is true, P could be false or Q could be false. But what if we have some additional information? Suppose we have as premises both (P→Q) and P. Then, we would know that if those premises were true, Q must be true. We have already checked this with a truth table.

| P | Q | (P→Q) (premise) | P (premise) | Q |
|---|---|---|---|---|
| T | T | T | T | T |
| T | F | F | T | F |
| F | T | T | F | T |
| F | F | T | F | F |

The first row of the truth table is the only row where all of the premises are true and for it, we find that Q is true. This, of course, generalizes to any conditional. That is, we have that:

| Φ | Ψ | (Φ→Ψ) (premise) | Φ (premise) | Ψ |
|---|---|---|---|---|
| T | T | T | T | T |
| T | F | F | T | F |
| F | T | T | F | T |
| F | F | T | F | F |

We now capture this insight not using a truth table, but by introducing a rule. The rule we will write out like this:

    (Φ→Ψ)
    Φ
    _____
    Ψ

This is a syntactic rule. It is saying that whenever we have written down a formula in our language that has the shape of the first row (that is, whenever we have a conditional), and whenever we also have written down a formula that has the shape in the second row (that is, whenever we also have written down the antecedent of the conditional), then go ahead, whenever you like, and write down a formula like that in the third row (the consequent of the conditional). The rule talks about the shape of the formulas, not their meaning. But of course we justified the rule by looking at the meanings.

We describe this by saying that the third line is “derived” from the earlier two lines using the inference rule.

This inference rule is old. We are, therefore, stuck with its well-established, but not very enlightening, name: “modus ponens”. Thus, we say, for the above example, that the third line is derived from the earlier two lines using modus ponens.
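Because modus ponens is purely syntactic, it is easy to mechanize: a program can apply it by matching shapes alone, without ever consulting a truth table. A minimal Python sketch (the tuple encoding `("->", antecedent, consequent)` and the function name are my own choices, not a standard):

```python
# Modus ponens as a syntactic rule: it inspects only the *shape* of the
# formulas, never their truth values.

def modus_ponens(conditional, antecedent):
    """If `conditional` has the shape (X -> Y) and `antecedent` is exactly X,
    derive Y; otherwise the rule does not apply."""
    if (isinstance(conditional, tuple) and len(conditional) == 3
            and conditional[0] == "->" and conditional[1] == antecedent):
        return conditional[2]
    raise ValueError("modus ponens does not apply to these lines")

# From (P -> Q) and P, derive Q:
print(modus_ponens(("->", "P", "Q"), "P"))  # Q
```

Note that the function never evaluates anything to true or false; it checks only that the second line is literally the antecedent of the first, which is exactly what the rule demands.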

## 3.1. Propositions

Symbolic logic manipulates *propositions*, which are assertions – declarative statements that can be understood as “true” (it’s a fact) or “false” (it is not a fact).

Examples of propositions from algebra are statements such as

- x > y
- x + y = 10
- x = x + 1

The third proposition is always understood as false, whereas the first two might be true or false, depending on the values of x and y .

Examples of propositions written in English are declarative sentences such as “It is raining” and “Two plus two equals four.”

In English, we can also write sentences that are not propositions: “Will it rain tomorrow?” is a question and not a true-false proposition. We will always stay within algebra and form true-false propositions from arithmetic operators like + and / and comparison operators like == and > . The operators, ∧ (AND), ∨ (OR), → (IMPLY), ¬ (NOT), are called *propositional connectives* because they connect together propositions to make new propositions. (Example: (x > 0 ∨ x < 0) → ¬ (2x = 0) is a proposition that connects together x > 0 , x < 0 , and 2x = 0 with ¬ , ∨ , and → .)
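Since the connectives here operate on arithmetic comparisons, the example can be evaluated directly in code. A small Python sketch (the function name `prop` is my own) that checks (x > 0 ∨ x < 0) → ¬(2x = 0) for a few integers:

```python
# Evaluate (x > 0 ∨ x < 0) → ¬(2x = 0) for concrete values of x.

def prop(x):
    antecedent = (x > 0) or (x < 0)
    consequent = not (2 * x == 0)
    return (not antecedent) or consequent  # material implication

for x in (-3, 0, 5):
    print(x, prop(x))  # True for every integer x
```

When x is 0 the antecedent is false, so the implication holds vacuously; for any other integer both sides are true.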

Later we will study FORALL (∀) and EXISTS (∃), which are more delicate than the propositional connectives and are called *quantifiers*.

## Symbolic Logic 5E: 3.2, I

“For each of the following arguments, state the Rule of Inference by which its conclusion follows from its premiss”

- Commutation.
- Material Implication.
- Transposition.
- De Morgan’s Theorem.
- Tautology.
- Association.
- Exportation.
- Material Equivalence.
- Distribution.
- Commutation.
- De Morgan’s Theorem.
- Exportation.
- Association.
- Material Equivalence.
- Distribution.
- Double Negation.
- Material Implication.
- Transposition.
- Exportation.
- Exportation.

## Proofs using Modus Ponens, Modus Tollens

Question:

Using the four rules of inference presented (MP, MT, DS, and HS), construct a proof for the following valid argument in the answer box below.

A).

1. w > (p v c). Premise

2. ~p. Premise

3. w. /∴ c. Premise/conclusion

B).

1. H>(D<>A). Premise

2. Mv(R>M). Premise

3. RvH. Premise

4. ~M. /∴ (D<>A). Premise/conclusion

D). Premise

2. Q>M. Premise

3. M>

N)>T . premise

2. G>(NvE) . premise

3. (

E)>T . Premise

5. (NvE)> . Premise/conclusion

F.

1. (D>C)>(NvW) . premise

2. D>S. Premise

3. S>C. Premise

4.

I.

1. Cv(H>R) . Premise

2. Sv(R>E). Premise

3.

© BrainMass Inc. brainmass.com March 5, 2021, 1:49 am

https://brainmass.com/math/logic/proofs-using-modus-pollens-modus-tollens-626379

#### Solution Preview

The format of the proof is a sequence of "Step [Reason]" statements.

1. w > (p v c). Premise

2. ~p. Premise

3. w. /∴ c. Premise/conclusion

Proof:

1. w > (pvc) [Premise]

2. w [Premise]

3. (p v c) [Modus ponens on 1 and 2]

4. ~p [Premise]

5. c [Disjunctive syllogism on 3 and 4; Conclusion]

B).

1. H>(D<>A). Premise

2. Mv(R>M). Premise

3. RvH. Premise

4. ~M. /∴ (D<>A). Premise/conclusion

Proof:

1. M v (R > M) [Premise]

2. ~M [Premise]

3. (R > M) [Disjunctive syllogism of 1 and 2]

4. ~R [Modus tollens on 2 and 3]

5. R v H [Premise]

6. H [Disjunctive syllogism on 4 and 5]

7. H > (D <> A) [Premise]

8. (D <> A) [Modus ponens on 6 and 7 Conclusion]

## Truth Tables

Truth tables exhibit all the truth-values that it is possible for a given statement or set of statements to have. What that means is that, even when we do not know whether a given statement is true or false, we can still know some things about how it relates to certain other statements. Wasn’t that a helpful clarification?

But let’s back up just a bit.

In the previous section we introduced the truth-functional definitions of the operators. With those definitions (exhibited on truth tables showing all the possible values “p” and “q” could have), we can “calculate” or figure out the truth-value of compound statements as long as we know the truth-values of the simple statements that make them up.

For instance, since we know that

Bananas are fruit is true

and Apples are fruit is true

and Pears are fruit is true,

we can figure out that this statement is false: “If Bananas are fruit, then Apples are fruit and Pears are not,” that is, B ⊃ (A ∙ ~P).

How? List the truth values under the letters, and then combine the values according to the definitions of the five operators, starting at the smallest unit and working up to the largest.

This table shows us the values of these three statements. Each is true, so we have a “T” under each statement and since the negation of “Pears are fruit” occurs (“Pears are not fruit”), we have an “F” under the tilde.

The simplest or smallest level at which any “calculation” can be done is the negation of the simple statement. The next level is the conjoining of the negated statement with “Apples are fruit.” The claim that “Apples are fruit but Pears are not” is false, so an “F” goes under the dot.

That dot statement is the consequent of the conditional, and the antecedent of the conditional is true, so the conditional itself is false: an “F” goes under the horseshoe. I’ve colored it red to make it more noticeable.

Now, here in Drupal, the only way to get these symbols to line up straight is to present them in a table. But the table showing us that B ⊃ (A ∙ ~P) is false is not what we’ll call a “Truth Table.” A **truth table shows all the possible truth values that the simple statements in a compound or set of compounds can have**, and it shows us a result of those values. The example we are looking at is calculating the value of a single compound statement, not exhibiting all the possibilities that the form of this statement allows for.
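The fruit example can be checked by working from the smallest unit outward, exactly as described above. A minimal Python sketch (the variable names are my own), with B, A, and P standing for the three true simple statements:

```python
# B ⊃ (A ∙ ~P): "If Bananas are fruit, then Apples are fruit and Pears are not."
B, A, P = True, True, True   # all three simple statements are true

not_P = not P                # tilde: False
conj = A and not_P           # dot: "Apples are fruit but Pears are not" -> False
cond = (not B) or conj       # horseshoe: true antecedent, false consequent

print(cond)  # False, matching the text
```

The three assignments mirror the three levels of calculation: tilde first, then dot, then horseshoe as the main operator.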

The tables we used to define the operators, repeated below, are truth tables. There are no combinations of truth values for these statements that have not been shown.

| p | ∙ | q |
|---|---|---|
| T | T | T |
| T | F | F |
| F | F | T |
| F | F | F |

| p | v | q |
|---|---|---|
| T | T | T |
| T | T | F |
| F | T | T |
| F | F | F |

| p | ⊃ | q |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | T |
| F | T | F |

| p | ≡ | q |
|---|---|---|
| T | T | T |
| T | F | F |
| F | F | T |
| F | T | F |

Truth tables provide the truth-functional definition of the five operators. With those definitions, we can calculate the truth value of compound statements once we know the truth values of the simple ones that make them up. Here are some examples, and some exercises you can practice on. Use your extensive knowledge to get started by assigning the appropriate truth values to the simple statements.

**Newt will speak at Mary Washington and at Liberty University.**

Each of these simple statements is true, so the conjunction made up of them (M ∙ L) is also true.

**If Liberty and Mary Washington both invite Newt, and Liberty is an evangelical university, then Mary Washington must be too.**

This conditional has a conjunction for its antecedent, and the first conjunct of that conjunction is itself a conjunction:

L= Liberty invites Newt

M= Mary Washington invites Newt

E= Liberty is evangelical

W= Mary Washington is evangelical.

| ((L | ∙ | M) | ∙ | E) | ⊃ | W |
|---|---|---|---|---|---|---|
| T | T | T | T | T | F | F |

I’ve color-coded the truth values: the first one we can enter is the green one, because L ∙ M is the smallest unit; the second one is the blue one, which combines the value from L ∙ M with the value of E. The last is the red one, which takes the true antecedent and the false consequent, yielding a false conditional statement. (I hope you did not think this was an argument rather than a statement.) By the way, don’t infer from this example that the first value you can calculate will always be the left-most one. The last value you calculate is the one for what’s called the main operator: this statement is a conditional, so its main operator is the horseshoe.

Here are some more you can practice with:

1. Obama and Hillary are Democrats if Newt is a Republican.

2. Either Obama will run or Newt is a Democrat.

3. If Clinton runs then she is at least 35 years old.

4. Obama is commander in chief if and only if he is President.

5. Being born in America is a necessary condition for being president.

6. If being born here is a necessary condition for running, then the Governator cannot run.

7. Either Hume did not invent truth tables or else if Wittgenstein wrote the Tractatus, then Russell’s paradox was bad news to Frege; however, Kant denied that “existence” was a predicate if and only if Aristotelian logic dominated for two thousand years.

That one’s fun. Let’s play around with it (once you’ve got the truth values of the simple statements straight).

(Hume did not invent truth-tables, Wittgenstein did, and he wrote the Tractatus too. Russell’s paradox was very bad news to Frege (and not only to him!). It is to Kant that we owe the insight not to treat “existence” as a predicate, and of course Aristotelian logic did dominate Western philosophy for two thousand years, until modern symbolic logic was developed by people like Frege, Russell and Wittgenstein.)

8. If either Hume did not invent truth tables or Wittgenstein wrote the Tractatus, then Russell’s paradox was bad news to Frege but Kant denied that “existence” was a predicate only if Aristotelian logic dominated for two thousand years.

9. Either Hume did not invent truth tables or else Wittgenstein wrote the Tractatus, and Russell’s paradox was bad news to Frege only if Kant denied that “existence” was a predicate, given that Aristotelian logic dominated for two thousand years.

10. If it is false both that Hume invented truth tables and that Kant denied “existence” was a predicate, then given that Aristotelian logic dominated for two thousand years, Wittgenstein’s writing the Tractatus implies that Russell’s paradox was bad news to Frege.

So that’s one thing we do by applying truth tables: **calculate truth-values of compound statements**, given that we know the truth-values of the simple statements they are made up of.

Another application of truth tables allows us to **classify every truth-functional statement** as falling into one of three categories. As you will have noticed, the truth-values of the simple statements in 8-10 did not change, but the truth-values of the main operators did. That’s because they are the kinds of statements in which the truth-values of the component statements matter: those values determine the truth-value of the compound, given the meanings of the operators. That kind of statement is called “contingent,” which means in this context that the value of the whole is dependent (contingent) upon the value of the parts. (There is another philosophical sense of “contingency,” of an existential nature, which has nothing to do with this logical notion of it.)

But there are also statements which have their truth-value as a consequence of their structure rather than as a consequence of their content. Some statements are true because their *structure* makes them be true, and it doesn’t matter whether they are about Wittgenstein or Leonardo or Humpty Dumpty. They are called “tautologies.” And then there’s a third group, the ones that are false as a result of their structure, and which, again, can’t be anything other than false regardless of what content you give them. These are called “self-contradictions.”

A simple example of a self-contradiction is this: “I think you’re right but I think you’re wrong.” The form of this is R ∙ ~R. It’s pretty clear intuitively that there’s something wrong with that claim, i.e., that it is false.

As a truth table makes plain, there’s always going to be an F under the dot in a conjunction that joins a statement with its own negation.

*Every class is either a member of itself or not.*

Every statement disjoined from its negation will be true. “Either a statement is true or false” is a tautology too (since “false” and “not true” are synonyms).

I’ve presented it in very simple examples, but here are some more challenging cases you can work on, to practice calculating, and get accustomed to classifying statements into contingencies, tautologies and self-contradictions:

Before you can do these, you’ll have to refresh yourself on how many rows a truth-table requires. The formula is “Number of rows = 2 to the nth power” where “n” is the number of simple statements. This means that if there is just one simple statement, only two rows are needed (one row shows what happens when it is true, the other shows what happens when it’s false). So # 1 below requires just two rows. A statement with two simple statements requires 4, one with three requires 8, one with four requires 16, one with five requires 32, one with six requires 64.
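This classification can be done mechanically: build the full table with 2-to-the-n rows and look at the column under the main operator. A Python sketch (the function name `classify` is my own):

```python
from itertools import product

def classify(statement, n):
    """Classify a statement form, given as a function of n booleans."""
    values = [statement(*row) for row in product([True, False], repeat=n)]
    if all(values):
        return "tautology"            # true on every row
    if not any(values):
        return "self-contradiction"   # false on every row
    return "contingent"               # mixed

print(classify(lambda r: r and not r, 1))  # self-contradiction (R ∙ ~R)
print(classify(lambda r: r or not r, 1))   # tautology (R v ~R)
print(classify(lambda p, q: p and q, 2))   # contingent
```

With one simple statement the loop produces exactly the two rows described above; with six it produces 64, just as the formula predicts.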

Besides doing the truth tables for these, make sure you can put them into words: 1 and 2 don’t say the same thing, for example, but what do they say? 1 says “M implies that M implies M.” (It can also be read as “If M is true then M implies M.” Or as “If M, then if M then M.”) What does 2 say?

Translate this one and do a table.

*The balance of payments will decrease if and only if interest rates remain steady however it is not the case that either interest rates will not remain steady or that the balance of payments will decrease.*

Now that you know how to calculate values and how to build truth tables, you can apply it to another task, which is to **compare statements to other statements**. When you have a set of compound and/or simple statements, you can make up a table that shows all the possibilities of their truth-values, and judge from that whether any two or more of them are equivalent to each other (like the triple bar statement and a biconditional), or whether they contradict each other (which is not the same thing as a statement contradicting *itself*), or whether they are consistent or **inconsistent** with one another. These features can be read off a truth table mechanically. If two or more statements always have the same truth-value under their main operators, they are equivalent. If they have opposite values under their main operators, they are contradictory (like A and O in categorical logic). **If they never show a True on the same line, they are inconsistent (meaning they cannot both be true)**, and if they show a True on at least one line under their main operator, they are consistent with each other (meaning that under certain contingent circumstances, they can both be true).
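Reading these features off a table mechanically is straightforward to sketch in Python (the function names `table` and `compare` are my own): build each statement's column, then compare the columns row by row.

```python
from itertools import product

def table(statement, n):
    """Column of truth values for a statement form of n letters."""
    return [statement(*row) for row in product([True, False], repeat=n)]

def compare(f, g, n):
    tf, tg = table(f, n), table(g, n)
    if tf == tg:
        return "equivalent"      # same value on every line
    if all(a != b for a, b in zip(tf, tg)):
        return "contradictory"   # opposite values on every line
    if any(a and b for a, b in zip(tf, tg)):
        return "consistent"      # both true on at least one line
    return "inconsistent"        # never both true

# p ⊃ q is equivalent to its transposition, ~q ⊃ ~p:
print(compare(lambda p, q: (not p) or q,
              lambda p, q: (not (not q)) or (not p), 2))  # equivalent
# A statement and its negation are contradictory:
print(compare(lambda p: p, lambda p: not p, 1))           # contradictory
```

Note that the checks are ordered: equivalence is tested first, since two equivalent statements that are sometimes true would also count as consistent.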

Here are a few examples you can try this out on. For each pair of statements, make a truth table for each of the expressions, and then compare them line by line under their main operators. See what you find.

## How Abstract Mathematical Logic Can Help Us in Real Life

The internet is a rich and endless source of flawed arguments. There has been an alarming gradual increase in non-experts dismissing expert consensus as elite conspiracy, as with climate science and vaccinations. Just because a lot of people agree about something doesn’t mean there is a conspiracy. Many people agree that Roger Federer won Wimbledon in 2017. In fact, probably everyone who is aware of it agrees. This doesn’t mean it’s a conspiracy: it means there are very clear rules for how to win Wimbledon, and many, many people could all watch him do it and verify that he did in fact win, according to the rules.

The trouble with science and mathematics in this regard is that the rules are harder to understand, so it is more difficult for non-experts to verify that the rules have been followed. But this lack of understanding goes back to a much more basic level: different uses of the word “theory”. In some uses, a “theory” is just a proposed explanation for something. In science, a “theory” is an explanation that is rigorously tested according to a clear framework, and deemed to be statistically highly likely to be correct. (More accurately, it is deemed statistically unlikely that the outcome would occur without the explanation being correct.)

In mathematics, though, a “theory” is a set of results that has been proved to be true according to logic. There is no probability involved, no evidence required, and no doubt. The doubt and questions come in when we ask how this theory models the world around us, but the results that are true inside this theory must logically be true, and mathematicians can all agree on it. If they doubt it, they have to find an error in the proof; it is not acceptable just to shout about it.

It is a noticeable feature of mathematics that mathematicians are surprisingly good at agreeing about what is and isn’t true. We have open questions, where we don’t know the answer yet, but mathematics from 2,000 years ago is still considered true and indeed is still taught. This is different from science, which is continually being refined and updated. I’m not sure that much science from 2,000 years ago is still taught, except in a history of science class. The basic reason is that the framework for showing that something is true in mathematics is logical proof, and the framework is clear enough for mathematicians to agree on it. It doesn’t mean a conspiracy is afoot.

Mathematics is, of course, not life, and logical proofs don’t quite work in real life. This is because real life has much more nuance and uncertainty than the mathematical world. The mathematical world has been set up specifically to eliminate that uncertainty, but we can’t just ignore that aspect of real life. Or rather, it’s there whether we ignore it or not.

Thus arguments to back something up in real life aren’t as clean as mathematical proofs, and that is one obvious source of disagreements. However, logical arguments should have a lot in common with proofs, even if they’re not quite as clear cut. Some of the disagreement around arguments in real life is unavoidable, as it stems from genuine uncertainty about the world. But some of the disagreement is avoidable, and we can avoid it by using logic. That is the part we are going to focus on.

Mathematical proofs are usually much longer and more complex than typical arguments in normal life. One of the problems with arguments in normal life is that they often happen rather quickly and there is no time to build up a complex argument. Even if there were time, attention spans have become notoriously short. If you don’t get to the point in one momentous revelation, it is likely that many people won’t follow.

By contrast a single proof in math might take 10 pages to write out, and a year to construct. In fact, the one I’m working on now has been 11 years in the planning, and has surpassed 200 pages in my notes. As a mathematician I am very well practiced at planning long and complex proofs.

A 200-page argument is almost certainly too long for arguments in daily life (although it’s probably not that unusual for legal rulings). However, 280 characters is rather too short. Solving problems in daily life is not simple, and we shouldn’t expect to be able to do so in arguments of one or two sentences, or by straightforward use of intuition. I will argue that the ability to build up, communicate and follow complex logical arguments is an important skill of an intelligently rational human. Doing mathematical proofs is like when athletes train at very high altitude, so that when they come back to normal air pressure things feel much easier. But instead of training our bodies physically, we are training our minds logically, and that happens in the abstract world.

Most real objects do not behave according to logic. I don’t. You don’t. My computer certainly doesn’t. If you give a child a cookie and another cookie, how many cookies will they have? Possibly none, as they will have eaten them.

This is why in mathematics we forget some details about the situation in order to get into a place where logic does work perfectly. So instead of thinking about one cookie and another cookie, we think about one plus one, forgetting the “cookie” aspect. The result of one plus one is then applicable to cookies, as long as we are careful about the ways in which cookies do and don’t behave according to logic.

Logic is a process of constructing arguments by careful deduction. We can try to do this in normal life with varying results, because things in normal life are logical to different extents. I would argue that nothing in normal life is truly entirely logical. Later we will explore how things fail to be logical: because of emotions, or because there is too much data for us to process, or because too much data is missing, or because there is an element of randomness.

So in order to study anything logically we have to forget the pesky details that prevent things from behaving logically. In the case of the child and the cookies, if they are allowed to eat the cookies, then the situation will not behave entirely logically. So we impose the condition that they are not allowed to eat the cookies, in which case those objects might as well not be cookies, but anything inedible as long as it is separated into discrete chunks. These are just “things”, with no distinguishable characteristics. This is what the number 1 is: it is the idea of a clearly distinguishable “thing”.

This move has taken us from the real world of objects to the abstract world of ideas. What does this gain us?

The advantage of making the move into the abstract world is that we are now in a place where everything behaves logically. If I add one and one under exactly the same conditions in the abstract world repeatedly, I will always get 2. (I can change the conditions and get the answer as something else instead, but then I’ll always get the same answer with those new conditions too.)

They say that insanity is doing the same thing over and over again and expecting something different to happen. I say that logic (or at least part of it) is doing the same thing over and over again and expecting the same thing to happen. Where my computer is concerned, it is this that causes me some insanity. I do the same thing every day and then periodically my computer refuses to connect to the wifi. My computer is not logical.

A powerful aspect of abstraction is that many different situations become the same when you forget some details. I could consider one apple and another apple, or one bear and another bear, or one opera singer and another opera singer, and all of those situations would become “1 + 1” in the abstract world. Once we discover that different things are somehow the same, we can study them at the same time, which is much more efficient. That is, we can study the parts they have in common, and then look at the ways in which they’re different separately.

We get to find many relationships between different situations, possibly unexpectedly. For example, I have found a relationship between a Bach prelude for the piano and the way we might braid our hair. Finding relationships between different situations helps us understand them from different points of view, but it is also fundamentally a unifying act. We can emphasize differences, or we can emphasize similarities. I am drawn to finding similarities between things, both in mathematics and in life. Mathematics is a framework for finding similarities between different parts of science, and my research field, category theory, is a framework for finding similarities between different parts of math.

When we look for similarities between things we often have to discard more and more layers of outer details, until we get to the deep structures that are holding things together. This is just like the fact that we humans don’t look extremely alike on the surface, but if we strip ourselves all the way down to our skeletons we are all pretty much the same. Shedding outer layers, or boiling an argument down to its essence, can help us understand what we think and in particular can help us understand why we disagree with other people.

A particularly helpful feature of the abstract world is that everything exists as soon as you think of it. If you have an idea and you want to play with it, you can play with it immediately. You don’t have to go and buy it (or beg your parents to buy it for you, or beg your grant-awarding agency to give you the money to buy it). I wish my dinner would exist as soon as I think of it. But my dinner isn’t abstract, so it doesn’t. More seriously, this means that we can do thought experiments with our ideas about the world, following the logical implications through to see what will happen, without having to do real and possibly impractical experiments to get those ideas.

Getting to the abstract, logical world is the first step towards thinking logically. Granted, in normal life we might not need to go there quite so explicitly in order to think logically about the world around us, but the process is still there when we are trying to find the logic in a situation.

A new system was recently introduced on the London Underground, where green markings were painted onto the platforms indicating where the doors would open. Passengers waiting for the train were instructed to stand outside the green areas, so that those disembarking the arriving train would have space to do so, instead of being faced with a wall of people trying to get on. The aim was to try and improve the flow of people and reduce the terrible congestion, especially during the rush hour.

This sounds like a good idea to me, but it was met with outcry from some regular commuters. Apparently some people were upset that these markings spoilt the “competitive edge” they had gained through years of commuting and studying train doors to learn where they would open. They were upset that random tourists who had never been to London before would now have just as much chance of boarding the train first.

This complaint was met with ridicule in return, but I thought it gave an interesting insight into one of the thorny aspects of affirmative action: if we give particular help to some previously disadvantaged people, then some of the people who don’t get this help are likely to feel hard done by. They think it’s unfair that only those other people get help. Like the absurdly outraged commuters, they might well feel miffed that they are losing their “competitive edge” that they feel they have earned, and they think that everyone else should have to earn it as well.

This is not an explicitly mathematical example, but this way of making analogies is the essence of mathematical thinking, where we focus on important features of a situation to clarify it, and to make connections with other situations. In fact, mathematics as a whole can be thought of as the theory of analogies. Finding analogies involves stripping away some details that we deem irrelevant for present considerations, and finding the ideas at the very heart of a situation, the ones that make it tick. This is a process of abstraction, and is how we get to the abstract world where we can more easily and effectively apply logic and examine the logic in a situation.

To perform this abstraction well, we need to separate out the things that are inherent from the things that are coincidental. Logical explanations come from the deep and unchanging meanings of things, rather than from sequences of events or personal decisions and tastes. The inherentness means that we should not have to rely on context to understand something.

We will see that our normal use of language depends on context all the time, as the same words can mean different things in different contexts, just as “quite” can mean “very” or “not much.” In normal language people judge things not only by context but also relative to their own experiences, whereas logical explanations need to be independent of personal experiences.

Understanding what is inherent in a situation involves understanding why things are happening, in a very fundamental sense. It is very related to asking “why?”, repeatedly, like a small child, and not being satisfied with immediate and superficial answers. We have to be very clear what we are talking about in the first place. Logical arguments mostly come down to unpacking what things really mean, and in order to do that you have to understand what things mean very deeply. This can often seem like making an argument all about definitions. If you try having an argument about whether or not you exist, you’ll probably find that the argument will quickly degenerate into an argument about what it means to “exist.” I usually find that I might as well pick a definition that means I do exist, as that’s a more useful answer than saying “Nope, I don’t exist.”

I have already asserted the fact that nothing in the world actually behaves according to logic. So how can we use logic in the world around us? Mathematical arguments and justifications are unambiguous and robust, but we can’t use them to draw completely unambiguous conclusions about the world of humans. We can try to use logic to construct arguments about the real world, but no matter how unambiguously we build the argument, if we start with concepts that are ambiguous, there will be ambiguity in the result. We can use extremely secure building techniques, but if we use bricks made of polystyrene we’ll never get a very strong building.

However, understanding mathematical logic helps us understand ambiguity and disagreement. It helps us understand where the disagreement is coming from. It helps us understand whether it comes from different use of logic, or different building blocks. If two people are disagreeing about healthcare they might be disagreeing about whether or not everyone should have healthcare, or they might be disagreeing about the best way to provide everyone with healthcare. Those are two quite different types of disagreement.

If they are disagreeing about the latter, they could be using different criteria to evaluate the healthcare systems, for example cost to the government, cost to the individuals, coverage, or outcomes. Perhaps in one system average premiums have gone up but more people have access to insurance. Or it could be that they are using the same criteria but judging the systems differently against those same criteria: one way to evaluate cost to individuals is to look at premiums, but another way is to look at the amount they actually have to pay out of their own pockets for any treatment. And even focusing on premiums there are different ways to evaluate those: means, medians, or looking at the cost to the poorest portion of society.

If two people disagree about how to solve a problem, they might be disagreeing about what counts as a solution, or they might agree on what counts as a solution but disagree about how to reach it. I believe that understanding logic helps us understand how to clear up disagreements, by first helping us understand where the root of the disagreement is.

*From* The Art of Logic in an Illogical World. *Used with permission of Basic Books. Copyright © 2018 by Eugenia Cheng.*

Is logic a science or an art? Of course, a logician would answer *Yes,* and here is why.

A **science** is a systematic study of some aspect of the natural world that seeks to discover laws (regularities, principles) by which God governs His creation. Whereas botany studies plants, astronomy studies the sky, and anatomy studies the body, logic studies *the mind as it reasons*, as it draws conclusions from other information. Logic as a science seeks to discover rules that distinguish good reasoning from poor reasoning, rules that are then simplified and systematized. These would include the rules for validity, the rules of inference and replacement, and so on.

For example, logic as a science could study the apostle Paul’s reasoning in 1 Cor. 15, “If there is no resurrection of the dead, then Christ has not been raised… But Christ has been raised, and is therefore the first fruits from among the dead.” It then simplifies this into a standard pattern: If not R then not C, C, therefore R. This rule can be further simplified, named, and organized in relation to other rules of logic.
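The pattern extracted from Paul's argument is modus tollens, and its validity can be checked mechanically: a form is valid exactly when no assignment of truth values makes all the premises true while the conclusion is false. A minimal sketch in Python, using R and C as above (the enumeration approach is an illustration added here, not part of the text):

```python
from itertools import product

# Modus tollens schema: from "if not R then not C" and "C", conclude "R".
# For booleans, p <= q is material implication (p implies q).
# The form is valid iff the conclusion R holds in every row of the
# truth table where both premises are true.
valid = all(
    r  # conclusion: R
    for r, c in product([True, False], repeat=2)
    if ((not r) <= (not c)) and c  # premises: ¬R → ¬C, and C
)
print(valid)  # True: the form is valid
```

The only row satisfying both premises is R = True, C = True, so the conclusion holds in every such row.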

An **art** is a creative application of the principles of nature for the production of works of beauty, skill, and practical use. The visual arts apply their principles to the production of paintings, sculptures, and pottery. The literary arts produce poems and stories. The performing arts produce operas, plays, and ballets.

Logic is one of the seven *liberal* arts, which include the Trivium of grammar, logic, and rhetoric. These arts are the skills which are essential for a free person (*liberalis*, “worthy of a free person”) to take an active part in daily life, for the benefit of others. Specifically, logic as an art seeks to apply the principles of reasoning to analyze and create arguments, proofs, and other chains of reasoning.

Logic is the science and art of reasoning well. Logic as a science seeks to discover rules of reasoning; logic as an art seeks to apply those rules to rational discourse.

## Symbolic Logic 5E: 3.1, IV

“Construct a formal proof of validity for each of the following arguments, using the abbreviations suggested”

1. A ∨ ¬I
2. D→I
3. ¬A
4. (¬D ∧ ¬I)→W  ∴ W
5. ¬I (3,1,DS)
6. ¬D (5,2,MT)
7. ¬D ∧ ¬I (6,5,CONJ)
8. W (7,4,MP)
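Each of these syntactic proofs can be cross-checked semantically: an argument is valid exactly when no truth assignment makes every premise true while the conclusion is false. A brute-force sketch of that check for the first argument above (the `implies` helper and lowercase variable names are illustrative additions matching the suggested abbreviations):

```python
from itertools import product

def implies(p, q):
    """Material implication: p → q."""
    return (not p) or q

# Argument: A ∨ ¬I, D→I, ¬A, (¬D ∧ ¬I)→W  ∴ W
def premises(a, d, i, w):
    return [a or not i,
            implies(d, i),
            not a,
            implies((not d) and (not i), w)]

# Collect every assignment that satisfies all premises but falsifies W.
counterexamples = [
    assignment
    for assignment in product([True, False], repeat=4)  # (a, d, i, w)
    if all(premises(*assignment)) and not assignment[3]
]
print(counterexamples)  # [] -> no counterexample, so the argument is valid
```

Only one assignment (A, D, I false and W true) satisfies all four premises, and it makes the conclusion true, so the counterexample list is empty.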

1. S→P
2. C→¬F
3. I→F
4. O→¬P
5. O ∨ C  ∴ ¬S ∨ ¬I
6. (O→¬P) ∧ (C→¬F) (4,2,CONJ)
7. ¬P ∨ ¬F (6,5,CD)
8. (I→F) ∧ (S→P) (3,1,CONJ)
9. ¬S ∨ ¬I (8,7,DD)

1. C→N
2. N→I
3. I→S
4. (C→S)→(N→C)
5. ¬C  ∴ ¬N
6. C→I (1,2,HS)
7. C→S (6,3,HS)
8. N→C (7,4,MP)
9. ¬N (5,8,MT)

1. (¬K ∧ P)→(B ∨ R)
2. ¬K→(B→D)
3. K ∨ (R→E)
4. ¬K ∧ P  ∴ D ∨ E
5. ¬K (4,SIMP)
6. B→D (2,5,MP)
7. R→E (3,5,DS)
8. (B→D) ∧ (R→E) (6,7,CONJ)
9. B ∨ R (1,4,MP)
10. D ∨ E (8,9,CD)

1. (A→B) ∧ (B→¬C)
2. C→¬D
3. B→E
4. ¬D→F
5. ¬E ∨ ¬F  ∴ ¬A ∨ ¬C
6. (B→E) ∧ (¬D→F) (4,3,CONJ)
7. ¬B ∨ ¬¬D (5,6,DD)
8. A→B (1,SIMP)
9. (A→B) ∧ (C→¬D) (8,2,CONJ)
10. ¬A ∨ ¬C (9,7,DD)

1. (G ∨ H)→¬I
2. I ∨ H
3. (H ∨ ¬G)→J
4. G  ∴ J ∨ ¬H
5. G ∨ H (4,ADD)
6. ¬I (5,1,MP)
7. H (6,2,DS)
8. H ∨ ¬G (7,ADD)
9. J (8,3,MP)
10. J ∨ ¬H (9,ADD)

1. (R→P) ∧ (¬P→M)
2. (M→D) ∧ (D→R)
3. (¬M ∨ ¬R)→(¬P ∨ ¬D)
4. ¬M  ∴ ¬R ∨ ¬M
5. ¬M ∨ ¬R (4,ADD)
6. ¬R ∨ ¬M (5,COMM)

1. V→F
2. V ∨ (P→Q)
3. M ∨ (R→C)
4. M→F
5. (¬V ∧ ¬M)→(R ∨ P)
6. ¬F  ∴ C ∨ Q
7. ¬V (6,1,MT)
8. ¬M (6,4,MT)
9. ¬V ∧ ¬M (7,8,CONJ)
10. R ∨ P (9,5,MP)
11. P→Q (7,2,DS)
12. R→C (8,3,DS)
13. (R→C) ∧ (P→Q) (11,12,CONJ)
14. C ∨ Q (10,13,CD)

1. T ∨ (E→D)
2. T→C
3. (E→G)→(D→I)
4. (¬T ∨ ¬C)→(D→G)
5. ¬C
6. ¬I ∨ ¬G  ∴ ¬D ∨ ¬E
7. ¬T (2,5,MT)
8. E→D (7,1,DS)
9. ¬T ∨ ¬C (7,ADD)
10. D→G (9,4,MP)
11. E→G (8,10,HS)
12. D→I (3,11,MP)
13. (D→I) ∧ (E→G) (12,11,CONJ)
14. ¬D ∨ ¬E (13,6,DD)
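A truth-table check can confirm this last derivation as well. Here is a sketch with a small reusable `valid` helper (an illustration added alongside the exercises, not part of the exercise set; variable names follow the abbreviations above):

```python
from itertools import product

def implies(p, q):
    """Material implication: p → q."""
    return (not p) or q

def valid(premises, conclusion, n_vars):
    """True iff no assignment satisfies all premises while falsifying the conclusion."""
    return all(
        conclusion(*vs)
        for vs in product([True, False], repeat=n_vars)
        if all(p(*vs) for p in premises)
    )

# Last argument:
# T ∨ (E→D), T→C, (E→G)→(D→I), (¬T ∨ ¬C)→(D→G), ¬C, ¬I ∨ ¬G  ∴ ¬D ∨ ¬E
prems = [
    lambda t, c, e, d, g, i: t or implies(e, d),
    lambda t, c, e, d, g, i: implies(t, c),
    lambda t, c, e, d, g, i: implies(implies(e, g), implies(d, i)),
    lambda t, c, e, d, g, i: implies((not t) or (not c), implies(d, g)),
    lambda t, c, e, d, g, i: not c,
    lambda t, c, e, d, g, i: (not i) or (not g),
]
concl = lambda t, c, e, d, g, i: (not d) or (not e)
print(valid(prems, concl, 6))  # True
```

A semantic check like this shows the argument *is* valid but not *why*; the derivations above supply the why, one rule of inference at a time.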