Diego Bravo Estrada

Warning
This document is being built as a self-help reference for the author as part of his learning process. In no way should it be considered authoritative.

Proof assistants are computational systems (software) which provide facilities to develop and verify the correctness of mathematical proofs. Those facilities include a language for steering the construction of logical proofs, some degree of automated reasoning, specialized syntax for expressing mathematical concepts, bookkeeping of assertions and formulae, etc.

This tutorial is a very basic introduction to the Lean system, which consists of a functional language compiler tailored for interactive theorem proving. There is also a companion mathematical library (Mathlib) containing formal proofs of many theorems, and utilities to help in proving new ones.

Installation

The usual editor is VSCode with the Lean 4 extension.

Note
After following the instructions, check that the lake command is available (otherwise, take a look at the PATH environment variable.)

Create a new Lean project

Go to some directory and type:

lake new butters math

replace "butters" with the project name. This will create a new butters subdirectory. Go inside it, and type:

lake update ; lake exe cache get

This will take some time and is needed in order to setup Mathlib4. Now start VSCode:

code .

It may require a "cache rebuild" (see the messages.) Then we are ready. The first time, it may take a long time to "react".

From this point, *.lean source files may be created to contain proofs. We usually add the following import at the beginning of each Lean source file:

import Mathlib.Tactic
Note
When importing "Mathlib", the system may freeze for some time.
Important
In our opinion, trying to learn Lean just by reading documents (like this one) is a doomed endeavor. Executing the examples and experimenting with variations, while carefully understanding the evolution of the goals, subgoals, and their associated hypotheses (in the Lean Infoview), is mandatory for a successful experience with this powerful tool.

Exploratory Proofs

An existential assertion

We’ll begin with the proof of a trivial "existential" fact:

There is some natural number such that it multiplied by itself gives 25.

This assertion may be formally written this way:

example : ∃ x : ℕ, x * x = 25 :=
  sorry

The sorry "wildcard" signals that we do not have (yet) a proof for the assertion. In complex proofs some parts may be marked with sorry in order to allow their incremental development.

Note
The ∃ symbol may be obtained (in VSCode) by typing \ex. This is a common pattern for math symbols like ℝ (\R) or ℕ (\N), etc.

For proving the existence, we should provide a "witness" term, in this case consisting of a value 5, and a proof that its self product gives 25:

example : ∃ x : ℕ, x * x = 25 :=
  have h : 5 * 5 = 25 := by rfl
  ⟨5, h⟩

The term ⟨5, h⟩ uses that witness, where h is an auxiliary proof of the existentially quantified formula instantiated at the witness 5.

The rfl tactic

The h auxiliary proof is built using the rfl tactic, which asserts that two terms identified as "the same" are equal; the technical name for this identification is "definitional equality", also known as "judgmental equality" or "computational equality".

Note
In practice, it identifies two terms which may be reduced to a common expression (normal form) by applying the defining properties of their components and some "fundamental" transformations: in "A survey of the project AUTOMATH", de Bruijn specifies the application of delta, beta and eta reductions (that is, replacement of definitions, replacement of a value in a parametrized abstracted formula, and elimination of the abstraction when the applied parameter is not present in the formula.)
Table 1. Lambda calculus conversions

α-conversion    (λ x, f x) = (λ y, f y)
β-conversion    (λ x, f x) a = f a
δ-conversion    double 5 = 5 + 5
ζ-conversion    (let n : ℕ := 2 in n + n) = 4
η-conversion    (λ x, f x) = f
ι-conversion    prod.fst (a, b) = a

In our example, the product 5 * 5 may be reduced to the simpler number 25 (which in turn may be expressed as 25 applications of the "successor" function to zero, as usually explained in lambda calculi.)
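This reduction may be checked interactively; a small sketch:

#eval 5 * 5     -- 25
#reduce 5 * 5   -- 25
example : 5 * 5 = 25 := by rfl -- accepted: both sides reduce to the same normal form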

Note
Definitional equality contrasts with "propositional equality" (also called "book equality" by de Bruijn), which corresponds to a proposition identifying two terms of the same type; this identification relies on the application of axioms and theorems. Definitional equality implies propositional equality, but the reverse is not true.
Note
A related criterion of equality, referred to as "extensional equality", conveys the idea that objects are equal if they have the same external properties. This criterion is applied to several mathematical objects, as in the "set extensionality axiom" (two sets are equal if they contain the same elements) or "function extensionality" (two functions are equal if, for the same inputs, their corresponding outputs are the same.)
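Both forms of extensionality are available in Lean (assuming Mathlib is imported); a minimal sketch:

-- function extensionality: pointwise equal functions are equal
example (f g : ℕ → ℕ) (h : ∀ x, f x = g x) : f = g := funext h

-- set extensionality: sets with the same members are equal
example (s t : Set ℕ) (h : ∀ x, x ∈ s ↔ x ∈ t) : s = t := Set.ext h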

Note that since rfl is a tactic, it is written after the by keyword.

Tactics may be written in the following lines after by, but must have a higher indentation:

example : ∃ x : ℕ, x * x = 25 :=
  have h : 5 * 5 = 25 := by
    rfl -- extra indentation is mandatory here!
  ⟨5, h⟩

The rfl tactic is backed by the theorem rfl, whose signature is:

def rfl {α : Sort u} {a : α} : a = a

The related Eq.refl is similar to rfl but expects an explicit argument. Below we use #check to inspect the type of an expression (see the response in the "Lean Infoview" window of VSCode):

#check Eq.refl 3  -- type is: 3 = 3

A more general way to prove assertions consists in "applying" theorems with apply; the rfl tactic essentially applies the rfl theorem, but there is also a related refl theorem, and the "equality" class Eq also provides a reflexivity theorem:

example : ∃ x : ℕ, x * x = 25 :=
  have h : 5 * 5 = 25 := by apply rfl
  ⟨5, h⟩

example : ∃ x : ℕ, x * x = 25 :=
  have h : 5 * 5 = 25 := by apply refl
  ⟨5, h⟩

example : ∃ x : ℕ, x * x = 25 :=
  have h : 5 * 5 = 25 := by apply Eq.refl
  ⟨5, h⟩

From the Lean documentation: "The norm_num tactic is specifically designed to handle numerical proofs. It cannot be used to simplify or prove goals involving non-numerical expressions or types."

example : ∃ x : ℕ, x * x = 25 :=
  have h : 5 * 5 = 25 := by norm_num
  ⟨5, h⟩

The simp tactic is a general-purpose simplifier which also closes this goal (in fact, norm_num is built as an extension of simp):

example : ∃ x : ℕ, x * x = 25 :=
  have h : 5 * 5 = 25 := by simp
  ⟨5, h⟩
Theorems and Lemmas

Proofs with a name are called theorems or lemmas:

theorem ex_aux1: 5 * 5 = 25 := by rfl

theorem ex_aux2: 5 * 5 = 25 := rfl -- syntactic sugar

theorem ex_aux3: 5 * 5 = 25 := by apply rfl

theorem ex_aux4: 5 * 5 = 25 := by apply refl

theorem ex_aux5: 5 * 5 = 25 := by apply Eq.refl

theorem ex_aux6: 5 * 5 = 25 := by exact rfl

Note the exact tactic, which is a stricter version of apply, requiring an almost perfect match of the theorem to the goal. Later we will see that apply allows the "partial" application of theorems to a goal, effectively transforming it into (hopefully) simpler ones.

These theorems present distinct proofs of the same assertion. Lean considers them definitionally equal:

#check ex_aux1 = ex_aux2

That means that any of these theorems may be used interchangeably; this notion is known as "proof irrelevance".
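In fact, proof irrelevance is built into Lean's definitional equality, so rfl directly identifies any two of these proofs:

example : ex_aux1 = ex_aux2 := rfl
example : ex_aux3 = ex_aux6 := rfl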

The theorem names may be used as proof terms. Here we use ex_aux1 in order to trivially prove the auxiliary hypothesis h:

example : ∃ x : ℕ, x * x = 25 :=
  have h : 5 * 5 = 25 := ex_aux1
  ⟨5, h⟩

But h is no longer needed:

example : ∃ x : ℕ, x * x = 25 := ⟨5, ex_aux1⟩

That is, h was a theorem created "on the fly" inside the example proof (whose scope was also restricted to the proof.)

Lean is keen to deduce arguments from the usage context; the underscore works as a wildcard requesting such a deduction:

example : ∃ x : ℕ, x * x = 25 := ⟨_, ex_aux1⟩

Given the witness 5, it is possible to employ an abbreviation for inserting tactics as needed:

example : ∃ x : ℕ, x * x = 25 := ⟨5, by rfl⟩
example : ∃ x : ℕ, x * x = 25 := ⟨5, rfl⟩ -- syntactic sugar
example : ∃ x : ℕ, x * x = 25 := ⟨5, by apply rfl⟩
example : ∃ x : ℕ, x * x = 25 := ⟨5, by apply refl⟩
example : ∃ x : ℕ, x * x = 25 := ⟨5, by apply Eq.refl⟩
example : ∃ x : ℕ, x * x = 25 := ⟨5, by exact rfl⟩
example : ∃ x : ℕ, x * x = 25 := ⟨5, by norm_num⟩
example : ∃ x : ℕ, x * x = 25 := ⟨5, by simp⟩
The use tactic

This tactic may be employed to provide the witness and automatically create the associated proof term:

example : ∃ x : ℕ, x * x = 25 := by
  use 5

Note that use 5 is not a proof term but the application of a tactic; that is the reason it must be prefixed by the by keyword.

In the following example, the witness 5 is not enough for Lean to build the proof term:

example : ∃ x : ℕ, x * x + 1 = 26 ∧ x > 4 := by
  use 5
  sorry

After the use 5 there is a remaining non-trivial goal (it can be seen by moving the cursor after use 5 and checking the "Tactic state" zone of the Lean Infoview panel in VSCode):

5 * 5 + 1 = 26 ∧ 5 > 4

In the previous example the remaining goal was the trivial 5 * 5 = 25, which was automatically accepted, being a definitional equality.

Now, the remaining goal may be solved with a further application of the norm_num or simp tactics:

example : ∃ x : ℕ, x * x + 1 = 26 ∧ x > 4 := by
  use 5
  norm_num

example : ∃ x : ℕ, x * x + 1 = 26 ∧ x > 4 := by
  use 5
  simp

Now let’s try to provide a proof without appealing to norm_num or simp. After use 5 the remaining goal:

5 * 5 + 1 = 26 ∧ 5 > 4

is the conjunction of the assertions 5 * 5 + 1 = 26 and 5 > 4. The constructor tactic divides this goal into its constituent subgoals. The first one is a definitional equality as discussed previously, so it may be solved with the rfl tactic. The second one is definitionally equivalent to

4 < 5

Also, five is the successor of four: 5 = Nat.succ 4, so we may apply the theorem Nat.lt.base:

#check Nat.lt.base -- ∀ (n : ℕ), n < Nat.succ n

example : ∃ x : ℕ, x * x + 1 = 26 ∧ x > 4 := by
  use 5
  constructor
  rfl
  apply Nat.lt.base

The closing application may be made clearer with the exact tactic, which by itself does not try to deduce the parameter of Nat.lt.base, so it must be provided explicitly (that is, unlike the apply case, it is not enough to specify exact Nat.lt.base without the 4.)

example : ∃ x : ℕ, x * x + 1 = 26 ∧ x > 4 := by
  use 5
  constructor
  rfl
  exact Nat.lt.base 4

The show tactic is used to provide a hint to Lean (and to an occasional reader) about the current goal or subgoal being proved. In the following proof it is used to spell out the subgoals created by the constructor tactic:

example : ∃ x : ℕ, x * x + 1 = 26 ∧ x > 4 := by
  use 5
  constructor
  show 5 * 5 + 1 = 26
  rfl
  show 4 < 5
  apply Nat.lt.base

As an alternative to constructor, the And.intro introduction rule may be applied:

example : ∃ x : ℕ, x * x + 1 = 26 ∧ x > 4 := by
  use 5
  apply And.intro
  rfl
  exact Nat.lt.base 4

Since rfl and exact solve the two goals derived from the current one, it would be handy to indent the former with respect to the latter, but this is not allowed (the derived goals replaced the conjunction at the same "level".) The "dot" syntax may be employed to force the indentation:

example : ∃ x : ℕ, x * x + 1 = 26 ∧ x > 4 := by
  use 5
  apply And.intro
  . rfl
  . exact Nat.lt.base 4
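A related idiom (not needed above, but often convenient) is the refine tactic, which supplies the witness and leaves the remaining parts as holes ?_ to be proved one by one:

example : ∃ x : ℕ, x * x + 1 = 26 ∧ x > 4 := by
  refine ⟨5, ?_, ?_⟩
  . rfl
  . exact Nat.lt.base 4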

School algebra

Consider the following trivial and equivalent proofs of a statement on three real numbers (which is valid in any ring):

example (a b c : ℝ) : (a - b) + c = a + (c - b) := by
  exact add_comm_sub a b c

example (a b c : ℝ) : (a - b) + c = a + (c - b) := by
  apply add_comm_sub -- arguments deduced by Lean

The name add_comm_sub points to a theorem which relates addition, subtraction and commutativity:

#check add_comm_sub -- a - b + c = a + (c - b)

Note that the theorem body may be obtained by typing #print add_comm_sub.

Now, if we are to deal with several real variables, instead of declaring their types in every example, we may set their type in a region of the file using the variable keyword inside a section…​end block:

section
variable (a b c : ℝ)

example : (a - b) + c = a + (c - b) := by exact add_comm_sub a b c

example : (a - b) + c = a + (c - b) := by apply add_comm_sub

end

A more interesting question is: how did we discover the add_comm_sub theorem? Besides navigating the Mathlib documentation, we may ask Lean to suggest suitable theorems using the following syntax:

section
variable (a b c : ℝ)

example : (a - b) + c = a + (c - b) := by apply?

end

In the Lean Infoview, we obtain this helpful message: Try this: exact add_comm_sub a b c.

It can’t be overemphasized how useful the apply? feature is. In complex proofs it may be added at any stage, and it will try to provide a suggestion using the active hypotheses.
In some cases, many unrelated active hypotheses may slow down its search, so it is useful to isolate the target statement in a simpler example containing just the required hypotheses for a faster apply?.
As the proof files grow, Lean gets slower at processing the user's modifications; for this reason it may be a good idea to keep a small "scratch" Lean file with examples dedicated to looking for apply? suggestions.

In line with these suggestions, regarding the Lean tactics (like apply) a good heuristic may be found in the "Hitchhiker’s Guide to Logical Verification":

Of course, you could look up the […​] scientific paper that describes its underlying algorithm, or even read the source code. But this might not be the most efficient use of your time. In truth, even expert users of proof assistants do not fully understand the behavior of the tactics they use daily. The most successful users adopt a relaxed, sporty attitude, trying tactics in sequence and studying the emerging subgoals, if any, to see if they are on the right path.

— https://github.com/blanchette/logical_verification_2023

Now we’ll prove a standard algebraic identity from elementary school: given the reals a and b, the following holds:

a ^ 3 - b ^ 3 = (a - b) * (a ^ 2 + a * b + b ^ 2)
Rewriting

We’ll use the rewrite or rw tactic, in which the proof goal is gradually rewritten using auxiliary equality theorems like the previously mentioned add_comm_sub. That is, rw [t] assumes there is a theorem t of the form A = B (also A ↔ B); the proof goal must have some subexpression matching the "A" pattern, and a new goal emerges using the "B" pattern as replacement. The syntax rw [← t] looks for some subexpression matching "B" to be replaced by "A".

Note
The rewrite tactic has the syntax rw [theorem-name arguments…] but in many cases Lean may deduce the arguments, so the theorem name alone is enough; when several subexpressions match the theorem, an ambiguity arises. In this case some arguments may be provided in order to break the ambiguity.
Note
The ← symbol is obtained in VSCode with the combination \l. By the way, the "conditional" (→) and bi-conditional (↔) symbols are obtained with \to and \iff, respectively.
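As a minimal illustration of both rewriting directions (a toy example, unrelated to the proof below):

example (a b : ℕ) (h : a = b) : a + a = b + b := by
  rw [h]   -- replaces a by b; the goal closes by reflexivity

example (a b : ℕ) (h : a = b) : a + a = b + b := by
  rw [← h] -- replaces b by a; the goal closes by reflexivity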

The next proof is a bit long but not very difficult to follow. By the way, we are not asserting that this is the minimal one:

example : a ^ 3 - b ^ 3 = (a - b) * (a ^ 2 + a * b + b ^ 2) := by
  rw [mul_add, -- (a - b) * (a ^ 2 + a * b) + (a - b) * b ^ 2
      mul_add, -- (a - b) * a ^ 2 + (a - b) * (a * b) + (a - b) * b ^ 2
      sub_mul, -- a * a ^ 2 - b * a ^ 2 + (a - b) * (a * b) + (a - b) * b ^ 2
      sub_mul, -- a * a ^ 2 - b * a ^ 2 + (a * (a * b) - b * (a * b)) + (a - b) * b ^ 2
      sub_mul, -- a * a ^ 2 - b * a ^ 2 + (a * (a * b) - b * (a * b)) + (a * b ^ 2 - b * b ^ 2)
      add_sub, -- a * a ^ 2 - b * a ^ 2 + (a * (a * b) - b * (a * b)) + a * b ^ 2 - b * b ^ 2
      pow_two, -- a * (a * a) - b * (a * a) + (a * (a * b) - b * (a * b)) + a * b ^ 2 - b * b ^ 2
      ←pow_three, -- a ^ 3 - b * (a * a) + (a * (a * b) - b * (a * b)) + a * b ^ 2 - b * b ^ 2
      pow_two, -- a ^ 3 - b * (a * a) + (a * (a * b) - b * (a * b)) + a * (b * b) - b * (b * b)
      ←pow_three, -- a ^ 3 - b * (a * a) + (a * (a * b) - b * (a * b)) + a * (b * b) - b ^ 3
      add_comm_sub, -- a ^ 3 + (a * (a * b) - b * (a * b) - b * (a * a)) + a * (b * b) - b ^ 3
      sub_right_comm, -- a ^ 3 + (a * (a * b) - b * (a * a) - b * (a * b)) + a * (b * b) - b ^ 3
      mul_comm b, -- a ^ 3 + (a * (a * b) - a * a * b - b * (a * b)) + a * (b * b) - b ^ 3
      ←mul_assoc, -- a ^ 3 + (a * a * b - a * a * b - b * (a * b)) + a * (b * b) - b ^ 3
      sub_self, -- a ^ 3 + (0 - b * (a * b)) + a * (b * b) - b ^ 3
      zero_sub, -- a ^ 3 + -(b * (a * b)) + a * (b * b) - b ^ 3
      mul_comm, -- a ^ 3 + -(a * b * b) + a * (b * b) - b ^ 3
      mul_assoc, -- a ^ 3 + -(b * b * a) + a * (b * b) - b ^ 3
      mul_comm, -- a ^ 3 + -(b * (a * b)) + b * (a * b) - b ^ 3
      add_assoc, -- a ^ 3 + (-(b * (a * b)) + b * (a * b)) - b ^ 3
      add_left_neg, -- a ^ 3 + 0 - b ^ 3
      add_zero] -- done!

The "right hand side" of the goal is rewritten as needed, and its value is shown as comments. In practice, the reader should follow the proof hovering the cursor over the rewriting theorems and looking the changing goal in the "Lean Infoview" panel. In the following proofs of this document we will not show the goal transformation at this detail level, so the user should get used to read the Lean Infoview.

The proof expands the products of the second member of the identity using mul_add, sub_mul and add_sub. The resulting expression contains terms obviously cancelable.

Using pow_two we convert the a ^ 2 to a * a (in two places) and later the same conversion is done for b. Then ←pow_three converts a * (a * a) into a ^ 3 (note the arrow to apply the conversion in reverse direction); the same is done for b.

Sometimes a theorem pattern appears at several points of the goal expression, and rw applies to the first one. If this is not desired, it is possible to provide some arguments to the theorem in order to disambiguate the target subexpression; see for example the usage of mul_comm b.

Finally, a number of theorems are applied in order to reorder the terms. This is a bit convoluted since we have additions and subtractions, and those operations are binary, so there are many implicit parentheses which force a careful reassociation and commutation. The key insight is to ask Lean for the required theorems using apply? as previously explained.

The following proof is even more involved but may be more interesting. It rewrites the binomial a ^ 3 - b ^ 3 in order to reach the factored expression of the second member. We prove a couple of helper hypotheses (which may be considered internal lemmas) using have:

example : a ^ 3 - b ^ 3 = (a - b) * (a ^ 2 + a * b + b ^ 2) := by
  have add_and_sub_to_diff (x y z : ℝ) : x - y = x - z - y + z := by
    rw [add_neg_eq_iff_eq_add, add_comm_sub, sub_sub,
      sub_add_eq_sub_sub, add_sub_assoc]
    rfl
  have h_dif_sq : a ^ 2 - b * b = (a + b) * (a - b) := by
    rw [add_and_sub_to_diff, pow_two, mul_sub, add_comm_sub,
      sub_mul, mul_comm (a - b), add_mul]
  rw [add_and_sub_to_diff, pow_three', pow_two, mul_sub,
    pow_three', add_comm_sub, sub_mul,
    h_dif_sq, mul_assoc, mul_comm (a - b), mul_assoc,
    add_mul, mul_comm, add_mul, add_assoc, pow_two]

Note that the add_and_sub_to_diff hypothesis is valid for any three expressions of real type, so we define it as taking three (generic) real numbers as parameters. Just for clarity, we used the new variable names x, y and z (though it would also work with a, b and c, as shown in h_dif_sq.) Note the use of rfl in the proof, which leverages the fact that a - b is the same as a + (-b) by definition.

Using this hypothesis, another one is proved: a ^ 2 - b * b = (a + b) * (a - b), and finally the identity is proved with its help.

Calc environment

A third variant uses the calc tactic which is used to prove statements of the form A = B by a chain of partial proofs of A = X1, X1 = X2…​ XN = B.

example : a ^ 3 - b ^ 3 = (a - b) * (a ^ 2 + a * b + b ^ 2) := by
  have add_and_sub_to_diff (x y z : ℝ) : x - y = x - z - y + z := by
    calc
      x - y = x + (- y) := by rfl
      _ = x + -z + -y + z := by
        rw [← add_zero (x + (- y)), ← add_left_neg z,
          add_assoc, add_assoc (-y), add_comm (-y), add_assoc, add_assoc]
      _ = x - z - y + z := by rfl
  have h_dif_sq : a ^ 2 - b * b = (a + b) * (a - b) :=
    calc
      a ^ 2 - b * b = a * a - a * b - b * b + a * b := by
        rw [add_and_sub_to_diff (a ^ 2) (b * b) (a * b), pow_two]
      _ =  a * (a - b) + (a * b - b * b) := by
        rw [mul_sub, add_comm_sub]
      _ = (a + b) * (a - b) := by
        rw [sub_mul, mul_comm (a - b), add_mul]
  calc
      a ^ 3 - b ^ 3 = a ^ 3 - a ^ 2 * b - b ^ 3 + a ^ 2 * b := by
        rw [add_and_sub_to_diff (a ^ 3) (b ^ 3) (a ^ 2 * b)]
      _ = a ^ 2 * (a - b) + (a ^ 2 * b - b * b * b) := by
        rw [pow_three', pow_two, mul_sub, pow_three', add_comm_sub]
      _ = a ^ 2 * (a - b) + (a + b) * b * (a - b) := by
        rw [sub_mul, h_dif_sq, mul_assoc, mul_comm (a - b), mul_assoc]
      _ = (a - b) * (a ^ 2 + a * b + b ^ 2) := by
        rw [add_mul, mul_comm, add_mul, add_assoc, pow_two]

Note that the proof of h_dif_sq employs calc as a proof term and not as a tactic (so no by keyword is present before it.) That is, calc works as a tactic and as a proof term.

Finally, the best method for the present case is to employ the ring tactic, which automates proofs for expressions involving rings:

example : a ^ 3 - b ^ 3 = (a - b) * (a * a + a * b + b * b) := by ring
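Other school identities are dispatched the same way; a couple of extra examples:

example (a b : ℝ) : (a + b) ^ 2 = a ^ 2 + 2 * a * b + b ^ 2 := by ring
example (a b : ℝ) : (a + b) * (a - b) = a ^ 2 - b ^ 2 := by ring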

As practice for the reader, let’s prove a related theorem:

If three reals sum to zero, then the sum of their cubes equals three times their product.

Some proofs below:

example (h: a + b + c = 0) : a ^ 3 + b ^ 3 + c ^ 3 = 3 * a * b * c := by
  rw [add_eq_zero_iff_eq_neg, ← neg_eq_iff_eq_neg, eq_comm] at h -- c = -(a+b)
  rw [h]
  ring

example (h: a + b + c = 0) : a ^ 3 + b ^ 3 + c ^ 3 = 3 * a * b * c := by
  have h₁ := add_eq_zero_iff_eq_neg.mp h
  have h₂ := neg_eq_iff_eq_neg.mpr h₁
  have h₃ := h₂.symm -- also: eq_comm.mp h₂
  rw [h₃]
  ring

example (h: a + b + c = 0) : a ^ 3 + b ^ 3 + c ^ 3 = 3 * a * b * c := by
  have h' : a = -(b + c) := by linarith only [h]
  rw [h']
  ring

The last proof leverages the linarith tactic, which verifies whether an equality or inequality can be established by linear arithmetic based on the context hypotheses. By default linarith employs all the available hypotheses in order to prove the target, but it may get too slow if there are many of them. In that case, it is faster to use linarith only […] to restrict its search to the specified hypotheses; this is also good for understanding which facts are being used by the tactic.

Naturals

Now consider the previous identity for the naturals:

example (a b : ℕ) : a ^ 3 - b ^ 3 = (a - b) * (a ^ 2 + a * b + b ^ 2) := by ...

In standard math this requires that a ≥ b, otherwise the subtraction is undefined. In Lean such a subtraction is allowed but truncated: when a < b, the result of a - b is defined as zero:

#eval 5 - 7 -- answer is 0

So this identity is valid in Lean even without adding the extra hypothesis a ≥ b.
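On the other hand, this truncation invalidates some familiar identities, so a bit of care is needed; a quick check:

#eval (5 - 7) + 7 -- 7, not 5: information was lost by the truncation
example : (5 : ℕ) - 7 = 0 := by rfl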

We may not use the ring tactic since the naturals do not form a ring. There is a ring_nf tactic which is a bit more flexible and may deal with simple cases involving subtraction of naturals when there is no need to introduce the mentioned extra hypothesis. We combine these tactics in the following proof, which transforms the second member until we get "simple" cancelable terms suitable for ring_nf:

example (a b : ℕ) : a ^ 3 - b ^ 3 = (a - b) * (a ^ 2 + a * b + b ^ 2) := by
  have h0 : b * (a ^ 2 + a * b + b ^ 2) = b * a ^ 2 + b * a * b + b * b ^ 2 :=
    by ring
  have h1: b * a ^ 2 + b * a * b = a * (a * b) + a * b ^ 2 :=
    by ring
  rw [Nat.mul_sub_right_distrib, Nat.mul_add, Nat.mul_add,
    h0, Nat.sub_add_eq, add_assoc, h1, Nat.add_sub_cancel]
  -- goal is now: a ^ 3 - b ^ 3 = a * a ^ 2 - b * b ^ 2
  ring_nf

Divisibility

Lean provides a divisibility operator '∣' which is obtained by typing a backslash followed by the pipe key: \|:

#eval 7 ∣ 21 -- true
#eval 7 ∣ 22 -- false
Note
Lean defines 0 ∣ n as true for n=0 and false otherwise.
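Under the hood, a ∣ b unfolds to the existential ∃ c, b = a * c, so a concrete divisibility fact may also be proved directly with a witness pair (a sketch):

example : (4 : ℕ) ∣ 20 := ⟨5, rfl⟩ -- witness 5, since 20 = 4 * 5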

From an equality like c * a = b it follows that a divides b and that c also divides b. There is a theorem which helps in the first case:

#check Dvd.intro_left -- (c : ℕ) (h : c * a = b) : a ∣ b

example (h : 5 * 4 = 20) : 4 ∣ 20 := Dvd.intro_left 5 h

example (h : 5 * 4 = 20) : 4 ∣ 20 := by exact Dvd.intro_left 5 h

It is trivial to prove the second case:

example (h : 5 * 4 = 20) : 5 ∣ 20 := by
  rw [mul_comm] at h
  exact Dvd.intro_left 4 h

We need this workaround since there is no corresponding theorem Dvd.intro_right (at least in the current version of Mathlib.) Another approach consists of implementing a new theorem dvd_intro_right1 for further usage:

theorem dvd_intro_right1 (a b c : ℕ) (h : a * c = b) : a ∣ b := by
  rw [mul_comm] at h
  exact Dvd.intro_left c h

example (h : 5 * 4 = 20) : 5 ∣ 20 := dvd_intro_right1 5 20 4 h

Having to write dvd_intro_right1 5 20 4 h is cumbersome, since all the variables could be deduced from the h expression. A better alternative is to declare the variables in braces {…}, which is a hint for Lean to deduce their values from the accompanying expressions (like h); then the proof consists of the term dvd_intro_right2 h:

theorem dvd_intro_right2 {a b c : ℕ} (h : a * c = b) : a ∣ b := by
  rw [mul_comm] at h
  exact Dvd.intro_left c h

example (h : 5 * 4 = 20) : 5 ∣ 20 := dvd_intro_right2 h

Now the variables are "implicit". As another iteration we may create a section declaring the implicit variables with braces, which is useful when there are several theorems leveraging such declarations:

section
variable {a b c d : ℕ}

theorem dvd_intro_right3 (h : a * c = b) : a ∣ b := by
  rw [mul_comm] at h
  exact Dvd.intro_left c h

example (h : 5 * 4 = 20) : 5 ∣ 20 := dvd_intro_right3 h
end

In Lean there is a gcd function for the greatest common divisor; also, there is a definition of coprimality corresponding to gcd m n = 1:

#eval Nat.Coprime 4 5   -- true
#eval Nat.Coprime 4 50  -- false
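Since Nat.Coprime m n is by definition gcd m n = 1, concrete coprimality facts are decidable and may also be proved, e.g., with the decide tactic:

example : Nat.Coprime 4 5 := by decide
#eval Nat.gcd 4 50 -- 2, hence 4 and 50 are not coprime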

Let’s review the Nat.Coprime.dvd_mul_right theorem:

#check Nat.Coprime.dvd_mul_right
-- {m n k : ℕ} (H : Nat.Coprime k n) : k ∣ m * n ↔ k ∣ m

The type is of biconditional form, from which it is possible to extract the conditional form in the "left to right" direction using the mp suffix, and in the "right to left" direction using the mpr suffix. The following theorems illustrate the mp direction (assume that they are in a section with the implicit a, b, c, and d variables):

theorem bDivCd (h1 : b ∣ c * d) (h2 : gcd b d = 1) : b ∣ c := by
  exact (Nat.Coprime.dvd_mul_right h2).mp h1

theorem dDivAb (h1 : d ∣ a * b) (h2 : gcd b d = 1) : d ∣ a := by
  have h3 : gcd d b = 1 := by rw [gcd_comm, h2]
  apply (Nat.Coprime.dvd_mul_right h3).mp h1

Logical arguments

At this point we recommend the great Logic and Proof online book, which at this time is based on Lean 3: https://lean-lang.org/logic_and_proof/; the basics of the "natural deduction" and "propositions as types" ideas are described there in detail.

Here we’ll not explain this topic but leave a brief citation for the interested:

By using terms and types to represent proofs and propositions, dependent type theory achieves a considerable economy of concepts. The question “Is H a proof of P?” becomes equivalent to “Does the term H have type P?” As a result, inside of Lean, there is no proof checker, only a type checker.

— Hitchhiker's Guide to Logical Verification

In Lean the propositions we want to prove are codified by types; a proof of one of these propositions corresponds to a term of that type.

So the existence of a term of the type corresponding to the proposition means that a proof is available, hence the proposition is valid (or, in an informal sense, simply "true".) There could be more than one term with the proposition’s type, which means that more than one proof is available for it.

Conversely, when no term of the proposition’s type has been constructed, the proposition is not (yet) proved.

In order to prove that the proposition is not valid (i.e. that is "false"), we must prove its negation; that is, for the proposition P (with type P) we must provide a term with type ¬P.

The type which contains the propositions' types is Prop (a type of types.) So the elements of Prop are infinitely many types corresponding to the propositions, which may be proved or not.

As previously commented, the propositions' types which have at least one proof (term) may be considered valid or "true", and those which (provably) have none may be regarded as not valid or "false". But this is just an informal interpretation which is not codified in Lean.

Anyway, Lean provides a Bool type having exactly the two values true and false, which may be used to build terms in the same way as numeric constants and variables.
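The contrast may be inspected with #check: a Bool is a value that computes, while a Prop is a type awaiting a proof; a small sketch:

#check true       -- true : Bool (a value)
#check True       -- True : Prop (a proposition, i.e. a type)
#check 2 + 2 = 4  -- 2 + 2 = 4 : Prop
#eval 2 + 2 == 4  -- true (a Bool computation)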

The authoritative Theorem Proving in Lean 4 also has a chapter which shows how to deal with logical arguments in Lean 4: https://lean-lang.org/theorem_proving_in_lean4/propositions_and_proofs.html ; here we do a quick review mostly oriented to the available syntax, which (in our opinion) is daunting at first.

Since we are dealing with classical logic, we may leverage this section to introduce basic set theory: the set operations are defined in terms of logical connectives relating membership assertions.

Conjunction commutativity

This is a basic theorem for which we provide several term proofs:

variable {α : Type*}
variable (s t u : Set α)
variable (p q r : Prop)

#check And.left  -- {a b : Prop} (self : a ∧ b) : a
#check And.right -- {a b : Prop} (self : a ∧ b) : b

example (h : p ∧ q) : q ∧ p := And.intro h.right h.left
example (h : p ∧ q) : q ∧ p := And.intro h.2 h.1
example (h : p ∧ q) : q ∧ p := ⟨h.right, h.left⟩

The And.left and And.right structure members provide the conjunction components (where And.left conj may be abbreviated as conj.left, and similarly for And.right.)

Note the ⟨…​ , …​⟩ abbreviated syntax to "build" a conjunction, and the usage of .1 and .2 to mean .left and .right.

Next we introduce proofs using tactic mode leveraging the And.comm theorem and the previously introduced constructor tactic:

#check And.comm -- {a b : Prop} : a ∧ b ↔ b ∧ a

example (h : p ∧ q) : q ∧ p := And.comm.mp h

example (h : p ∧ q) : q ∧ p := by
  exact And.comm.mp h

example (h : p ∧ q) : q ∧ p := by
  rw [And.comm] ; exact h

example (h : p ∧ q) : q ∧ p := by
  rw [And.comm] at h ; exact h

example (h : p ∧ q) : q ∧ p := by
  constructor
  exact h.right -- or h.2
  exact h.left  -- or h.1

The following examples prove the equivalence of the commuted conjunctions using the previously reviewed methods. Note the usage of the @ prefix to turn the implicit arguments of And.comm into explicit ones:

example : p ∧ q ↔ q ∧ p := And.comm

example : p ∧ q ↔ q ∧ p := @And.comm p q

example : p ∧ q ↔ q ∧ p :=
  have h1 : p ∧ q → q ∧ p := fun pq => ⟨pq.2, pq.1⟩
  have h2 : q ∧ p → p ∧ q := fun qp => ⟨qp.2, qp.1⟩
  Iff.intro h1 h2

example : p ∧ q ↔ q ∧ p := by
  rw [And.comm]

example : p ∧ q ↔ q ∧ p := by
  constructor
  intro pq
  exact ⟨pq.2, pq.1⟩
  intro qp
  exact ⟨qp.2, qp.1⟩

Now we show the commutativity of set intersection; take a moment to compare with the equivalence of conjunctions:

example : s ∩ t = t ∩ s := Set.inter_comm s t

example : s ∩ t = t ∩ s := by exact Set.inter_comm s t

example : s ∩ t = t ∩ s := by rw [Set.inter_comm]

example : s ∩ t = t ∩ s := by
  ext h
  -- goal is: h ∈ s ∩ t ↔ h ∈ t ∩ s
  exact and_comm

example : s ∩ t = t ∩ s := by
  ext h
  constructor
  -- first goal is: h ∈ s ∩ t → h ∈ t ∩ s
  -- with tactics
  intro hInST ; exact ⟨hInST.2, hInST.1⟩
  -- next goal is: h ∈ t ∩ s → h ∈ s ∩ t
  -- direct term construction
  exact fun hInTS => ⟨hInTS.right, hInTS.left⟩

example : s ∩ t = t ∩ s := by
  ext h
  constructor
  -- first goal is: h ∈ s ∩ t → h ∈ t ∩ s
  -- with tactics
  intro ⟨hr, hl⟩ ; exact ⟨hl, hr⟩
  -- next goal is: h ∈ t ∩ s → h ∈ s ∩ t
  -- direct term construction
  exact fun hr, hl => And.intro hl hr

The next examples prove the disjunction commutativity with explicit terms and tactic mode:

example (h: p ∨ q) : q ∨ p :=
  have i1 : p → q ∨ p := fun xp => Or.inr xp
  have i2 : q → q ∨ p := fun xq => Or.inl xq
  Or.elim h i1 i2

example (h: p ∨ q) : q ∨ p := by
  cases h with
  | inl xp => exact Or.inr xp
  | inr xq => exact Or.inl xq

example (h: p ∨ q) : q ∨ p := by
  rcases h with hp | hq
  exact Or.inr hp
  exact Or.inl hq

example (h: p ∨ q) : q ∨ p := by exact Or.comm.mp h

Now we prove a somewhat more complex assertion with a proof term and with tactics:

example : (p ∧ q) ∨ (p ∧ r) → p ∧ (q ∨ r) :=
  fun pqOrPr =>
    Or.elim pqOrPr
      (fun pq => ⟨pq.1, Or.inl pq.2⟩)
      (fun pr => ⟨pr.1, Or.inr pr.2⟩)

example : (p ∧ q) ∨ (p ∧ r) → p ∧ (q ∨ r) := by
  intro pqOrPr
  have pqToGoal : p ∧ q → p ∧ (q ∨ r) := by
    intro (pq : p ∧ q)
    exact ⟨pq.1, Or.inl pq.2⟩
  have prToGoal : p ∧ r → p ∧ (q ∨ r) := by
    intro (pr : p ∧ r)
    exact ⟨pr.1, Or.inr pr.2⟩
  exact Or.elim pqOrPr pqToGoal prToGoal

example : (p ∧ q) ∨ (p ∧ r) → p ∧ (q ∨ r) := by
  rintro (⟨xp, xq⟩ | ⟨xp, xr⟩)
  have qr : q ∨ r := Or.inl xq
  exact And.intro xp qr
  have qr : q ∨ r := Or.inr xr
  exact And.intro xp qr

The corresponding set statements are also proved (this is an exercise taken from Mathematics in Lean at https://leanprover-community.github.io/mathematics_in_lean/C04_Sets_and_Functions.html ):

example : (s ∩ t) ∪ (s ∩ u) ⊆ s ∩ (t ∪ u) := by
  rintro _ (⟨xs, xt⟩ | ⟨xs, xu⟩)
  have tu := by exact Set.mem_union_left u xt
  . exact ⟨xs, tu⟩
  have tu := by exact Set.mem_union_right t xu
  . exact ⟨xs, tu⟩

example : (s ∩ t) ∪ (s ∩ u) ⊆ s ∩ (t ∪ u) := by
  rintro z (⟨xs, xt⟩ | ⟨xs, xu⟩)
  have tu : z ∈ t ∪ u := by left ; exact xt
  . exact ⟨xs, tu⟩
  have tu : z ∈ t ∪ u := by right ; exact xu
  . exact ⟨xs, tu⟩

Another important relation is proved by a proof term and by tactics. Note that the implication associates to the right, so p → q → r is the same as p → (q → r):

example : (p → (q → r)) → (p ∧ q → r) :=
  fun pqr =>
    fun pq =>
      pqr pq.1 pq.2

example : (p → q → r) → (p ∧ q → r) := by
  intro pqr
  intro pq
  exact pqr pq.1 pq.2

The following proof illustrates the fact that ¬p is defined as p → False:

example : p → ¬ (¬ p ∧ q) :=
  fun xp => fun (npq : ¬p ∧ q) => npq.1 xp

example : p → ¬ (¬ p ∧ q) := by
  intro xp
  intro npq
  exact npq.1 xp

example : p → ¬ (¬ p ∧ q) := by
  intro xp
  by_contra h
  exact h.1 xp

For sets, the "set complement" of a set corresponds to the elements which don’t belong to such set:

example (h: x ∈ sᶜ) : ¬ (x ∈ s) := by exact h

A corresponding set-theory assertion using the set complement for the negation:

-- ᶜ is obtained with \compl
example : s ⊆ (sᶜ ∩ t)ᶜ := by
  intro _ xs
  rw [Set.compl_inter, compl_compl]
  exact Set.mem_union_left tᶜ xs

The following theorem is proved in several ways. The first two make use of the law of the excluded middle (em), which states that for any proposition p, p ∨ ¬p holds; we provide a proof term and then a tactic proof.

The following proof makes use of the propext axiom and a theorem about equality. From Core.lean:

"The axiom of propositional extensionality. It asserts that if propositions a and b are logically equivalent (i.e. we can prove a from b and vice versa),then a and b are equal, meaning that we can replace a with b in all contexts."

The last proof makes use of the tauto tactic, which splits the assumptions' connectives, solving the goal by applying the simplified assumptions and reflexivity:

example : ¬(p ↔ ¬p) :=
  fun pEqnp =>
    Or.elim (em p)
      (fun xp : p => pEqnp.mp xp xp)
      (fun xp : ¬p => xp (pEqnp.mpr xp))

example : ¬(p ↔ ¬p) := by
  by_contra h
  rcases (em p) with (xp | xnp)
  exact h.mp xp xp
  exact xnp (h.mpr xnp)

#check propext -- (h: a ↔ b) : a = b

example : ¬(p ↔ ¬p) := by
  intro pp
  have pEqNotP := propext pp
  exact false_of_a_eq_not_a pEqNotP

example : ¬(p ↔ ¬p) := by tauto

A corresponding set-theoretical statement is proved below:

example (xInU: α) : s ≠ (sᶜ) := by
  intro n
  have ex : (xInU ∈ s) ∨ (xInU ∉ s) := em _
  have a1 : (xInU ∈ s) → False := fun inS => (n ▸ inS) inS
  have a2 : (xInU ∉ s) → False := fun ninS => (n ▸ ninS) ninS
  exact Or.elim ex a1 a2

-- thanks to Kyle Miller
example [Nonempty α] : s ≠ (sᶜ) := by
  inhabit α
  have xInU : α := default
  intro n
  have ex : (xInU ∈ s) ∨ (xInU ∉ s) := em _
  have a1 : (xInU ∈ s) → False := fun inS => (n ▸ inS) inS
  have a2 : (xInU ∉ s) → False := fun ninS => (n ▸ ninS) ninS
  exact Or.elim ex a1 a2

Note that the hypothesis (xInU: α) is needed since the assertion is valid only if the type α is not empty (otherwise, the only existing set would be the empty set, which equals its own complement.)

Lean provides a "predicate" Nonempty which may be used to signal that α is not empty, so a "default value" of this type may be produced.
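As a minimal sketch of this mechanism (the statement itself is only illustrative):

example [Nonempty α] : ∃ x : α, x = x := by
  inhabit α -- turns Nonempty α into Inhabited α
  exact ⟨default, rfl⟩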

Now, for illustration, below we provide some (suboptimal) proofs for another logical statement. The right triangle ▸ provides a rewriting capability suitable for building proof terms:

example (h: (p ∧ q) = (p ∧ r)) : r ∧ (p ∧ q) ↔ r ∧ (p ∧ r) :=
  Iff.intro
    (fun c1 => Eq.subst h c1)
    (fun c2 => Eq.substr h c2)

example (h: (p ∧ q) = (p ∧ r)) : r ∧ (p ∧ q) ↔ r ∧ (p ∧ r) :=
  Iff.intro
    (fun c1 => h ▸ c1)
    (fun c2 => h ▸ c2)

example (h: (p ∧ q) = (p ∧ r)) : r ∧ (p ∧ q) ↔ r ∧ (p ∧ r) :=
  have h' : r ∧ (p ∧ q) ↔ r ∧ (p ∧ q) := by rfl
  h ▸ h'

example (h: (p ∧ q) = (p ∧ r)) : r ∧ (p ∧ q) ↔ r ∧ (p ∧ r) := by
  constructor
  intro c1 ; rw [h] at c1 ; exact c1
  intro c2 ; rw [← h] at c2 ; exact c2

example (h: (p ∧ q) = (p ∧ r)) : r ∧ (p ∧ q) ↔ r ∧ (p ∧ r) := by
  rw [h]

example (h: (p ∧ q) = (p ∧ r)) : r ∧ (p ∧ q) ↔ r ∧ (p ∧ r) := by
  tauto
Quantifiers

In this section we add some examples involving quantifiers. These examples were taken from the Logic and Proof book (referenced previously.)

We’ll define some predicates to be employed in the examples; they are propositions which hold for some domain objects (of some type); that is, given an object a : α, a predicate P signals that some property holds for a (that is, P a may or may not be the case.) So the resulting propositions (Prop) are a function of the α-typed objects:

variable {A B C : α → Prop}
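As a concrete illustration (this particular predicate is hypothetical and not used below), a predicate on the naturals is just a function into Prop:

def isSquare : ℕ → Prop := fun n => ∃ k, n = k * k

#check isSquare 9 -- Prop (a proposition, not yet a proof)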

The universal quantifier presents a structure similar to an implication, so it may be built from an intro block. Also, it may be "eliminated" by providing a concrete element, following a pattern similar to modus ponens; in the first example the quantifier is eliminated in order to deal with a plain conjunction, from which the left (.1) member is used to build the required proof term. In the second example the quantifiers in the hypotheses are also eliminated using the witness w : α created by the previous intro:


example : (∀ x, A x ∧ B x) → ∀ x, A x := by
  intro axAndBx w
  exact (axAndBx w).1

example (h1 : ∀ x, A x ∨ B x)
  (h2 : ∀ x, A x → C x)
  (h3 : ∀ x, B x → C x) : ∀ x, C x := by
  intro w
  replace h1 := h1 w
  replace h2 := h2 w
  replace h3 := h3 w
  exact Or.elim h1 h2 h3 -- or: tauto

The existential quantifier was previously explored; here we add more examples:

example : (∃ x, A x) → ∃ x, A x ∨ B x := by
  intro ea
  rcases ea with ⟨w, aw⟩
  have awOrBw : A w ∨ B w := Or.inl aw
  exact ⟨w, awOrBw⟩

example (h1 : ∀ x, A x → B x) (h2 : ∃ x, A x) : ∃ x, B x := by
  rcases h2 with ⟨w, aw⟩
  have bw := h1 w aw
  exact ⟨w, bw⟩

example (h1 : ∃ x, A x ∧ B x) (h2 : ∀ x, B x → C x) :
     ∃ x, A x ∧ C x := by
    rcases h1 with ⟨w, awAndBw⟩
    have cwa : C w := h2 w awAndBw.2
    have awAndCw : A w ∧ C w := And.intro awAndBw.1 cwa
    exact ⟨w, awAndCw⟩

As explained previously, the negation is just an implication to False:

example : (¬ ∃ x, A x) → ∀ x, ¬ A x := by
  intro notEax x ax
  have eax : ∃ x, A x := ⟨x, ax⟩
  exact notEax eax

example : (∀ x, ¬ A x) → ¬ ∃ x, A x := by
  intro notAx eax
  rcases eax with ⟨w, aw⟩
  exact notAx w aw

Set theory may be worked with quantifiers and the membership relation. Here is a propositional example and a corresponding statement in set theory using the universal quantifier:

example : p ∧ r → p ∨ q := fun pr => Or.inl pr.1

example : ∀ x, x ∈ s ∩ u → x ∈ s ∪ t := by
  intro w wInSandU
  have wInS : w ∈ s := wInSandU.1
  exact Or.inl wInS

Another one dealing with logical negation and set complement:

example : ¬(p ∨ q) → ¬p := by
  intro npq xp
  have np : p ∨ q := Or.inl xp
  exact npq np

example : ∀ x, x ∈ (s ∪ t)ᶜ → x ∉ s := by
  intro w notInSuT wInS
  have wInSuT : w ∈ s ∪ t := Or.inl wInS
  exact notInSuT wInSuT

The following example deals with a binary predicate:

variable {R : α → α → Prop}

example : (∃ x, ∀ y, R x y) → ∀ y, ∃ x, R x y := by
  intro exVyRxy ya
  rcases exVyRxy with ⟨w, VyRwy⟩
  have rwya : R w ya := VyRwy ya
  exact ⟨w, rwya⟩

The next example introduces a function f:

example (f : α → α) (h : ∀ x, A x → A (f x)) :
   ∀ y, A y → A (f (f y)) := by
  intro ya hp
  have aux : A ya → A (f ya) := h ya
  have aux2 : A (f ya) := aux hp
  exact h (f ya) aux2
Inequalities and de Morgan

Let’s prove the following statement for naturals a, b, c and d:

¬(a < d ∨ c < b) ↔ d ≤ a ∧ b ≤ c

One proof is based on and_iff_not_or_not, which corresponds to a "de Morgan" law; also, the x < y propositions are rewritten as ¬ (y ≤ x) with lt_iff_not_ge:

#check and_iff_not_or_not   -- a ∧ b ↔ ¬(¬a ∨ ¬b)
#check lt_iff_not_ge        -- a < d ↔ ¬d ≤ a

theorem desDeMorgan1 : ¬(a < d ∨ c < b) ↔ d ≤ a ∧ b ≤ c := by
  have ad : a < d ↔ ¬ d ≤ a := by exact lt_iff_not_ge a d
  have cb : c < b ↔ ¬ b ≤ c := by exact lt_iff_not_ge c b
  rw [ad, cb]
  exact Iff.symm and_iff_not_or_not

As an exercise we now provide a more "structural" proof, which in some sense is more general (even if longer in this instance); the constructor tactic was reviewed before for "splitting" goals in conjunctive form, but it may also be employed on biconditional goals, transforming them into two reciprocal conditionals:

#check not_or -- ¬(p ∨ q) ↔ ¬p ∧ ¬q
#check Nat.not_lt -- ¬a < b ↔ b ≤ a

theorem desDeMorgan2 : ¬(a < d ∨ c < b) ↔ d ≤ a ∧ b ≤ c := by
  constructor
  -- first goal: ¬(a < d ∨ c < b) → d ≤ a ∧ b ≤ c
  intro hor
  constructor
  apply Nat.not_lt.mp  -- goal is ¬(a < d)
  intro ad  -- goal is False
  have adOrcb : a < d ∨ c < b := Or.inl ad
  exact hor adOrcb
  apply Nat.not_lt.mp  -- goal is ¬(c < b)
  intro cb  -- goal is False
  have adOrcb : a < d ∨ c < b := Or.inr cb
  exact hor adOrcb
  -- second goal: d ≤ a ∧ b ≤ c → ¬(a < d ∨ c < b)
  intro hand
  apply not_or.mpr
  constructor
  exact Nat.not_lt.mpr hand.left
  exact Nat.not_lt.mpr hand.right

Both conditionals are proved as usual through the introduction of an assumption with "intro". Observe the proof of negated statements by constructing a False (a contradiction), since ¬p is defined as p → False, following a common pattern in natural deduction.
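The pattern may be seen in isolation: since ¬p unfolds to p → False, a negated hypothesis may be applied to a proof of p like a function (a minimal sketch):

example (hp : p) (hnp : ¬p) : False := hnp hp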

Also, note the use of the Or.inl and Or.inr constructors corresponding to the "or introduction" natural deduction rules. Later we’ll leverage these constructors in order to prove disjunctive sentences with "proof by cases".

In the following variant of the last example, for the first conditional we introduce the contradiction tactic as an alternative to providing an explicit construction of False (like exact hor adOrcb); the contradiction tactic looks for contradictions among all the current context hypotheses.

In the reverse conditional, after the final constructor we have the goals ¬(a < d) and ¬(c < b); just for illustration, both are solved with distinct tactics oriented to pattern matching: the already described cases is applied to the hand conjunction in order to extract its first member hda from the And.intro constructor. The match tactic provides the ⟨a, b⟩ syntax to extract the conjunction components:

theorem desDeMorgan3 : ¬(a < d ∨ c < b) ↔ d ≤ a ∧ b ≤ c := by
  constructor
  -- first goal: ¬(a < d ∨ c < b) → d ≤ a ∧ b ≤ c
  intro hor
  constructor
  apply Nat.not_lt.mp  -- goal is ¬(a < d)
  intro ad  -- goal is False
  have adOrcb : a < d ∨ c < b := Or.inl ad
  contradiction
  apply Nat.not_lt.mp  -- goal is ¬(c < b)
  intro cb  -- goal is False
  have adOrcb : a < d ∨ c < b := Or.inr cb
  contradiction
  -- second goal: d ≤ a ∧ b ≤ c → ¬(a < d ∨ c < b)
  intro hand
  apply not_or.mpr
  constructor
  cases hand with
  | intro hda _ => exact Nat.not_lt.mpr hda
  match hand with
  | ⟨_, hbc⟩ => exact Nat.not_lt.mpr hbc

The next iteration introduces a small variant for the first conditional with the by_contra tactic, which converts the goal into a negated hypothesis and replaces the former goal with False. The second use of by_contra does not introduce a named (negated) hypothesis as in the previous case, but it may still be consumed through the apply …​ ; assumption pattern, which tries all the context hypotheses, even the unnamed ones.

The same pattern for unnamed hypotheses is applied in the reverse conditional.

theorem desDeMorgan4 : ¬(a < d ∨ c < b) ↔ d ≤ a ∧ b ≤ c := by
  constructor
  -- first goal: ¬(a < d ∨ c < b) → d ≤ a ∧ b ≤ c
  intro hor
  constructor
  apply Nat.not_lt.mp
  by_contra aLessD
  have : a < d ∨ c < b := by apply Or.inl aLessD
  contradiction
  apply Nat.not_lt.mp
  by_contra
  have : a < d ∨ c < b := by apply Or.inr ; assumption
  contradiction
  -- second goal: d ≤ a ∧ b ≤ c → ¬(a < d ∨ c < b)
  intro hand
  apply not_or.mpr
  constructor
  cases hand with | intro => apply Nat.not_lt.mpr ; assumption
  match hand with | ⟨_, _⟩ => apply Nat.not_lt.mpr ; assumption

As the last iteration, we will show how to "repeat" some blocks of the proof. We note that for each conditional there are two similar blocks of tactics whose repetition may be automated:

theorem desDeMorgan5 : ¬(a < d ∨ c < b) ↔ d ≤ a ∧ b ≤ c := by
  constructor
  -- first goal: ¬(a < d ∨ c < b) → d ≤ a ∧ b ≤ c
  intro hor
  constructor
  repeat {
    apply Nat.not_lt.mp
    by_contra
    have : a < d ∨ c < b := by {
      first |
      apply Or.inl ; assumption |
      apply Or.inr ; assumption
    }
    contradiction
  }
  -- second goal: d ≤ a ∧ b ≤ c → ¬(a < d ∨ c < b)
  intro hand
  apply not_or.mpr
  constructor
  repeat ( cases hand with | intro => apply Nat.not_lt.mpr ; assumption )

The first conditional is a bit more complex since two distinct constructors are applied: Or.inl and Or.inr. The first | t1 | t2 …​ pattern instructs Lean to try the tactics t1, t2, …​ until the first one succeeds. As seen in the previous example, in both conditionals two repetitions are needed to satisfy the corresponding goals.

For this example, introducing repetitions may be overkill; in our opinion, the main problem with these "tactic combinators" is that the proof context changes are more difficult to follow in the Lean InfoView.
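A standalone illustration of first (a hypothetical example, unrelated to the proof above): the first alternative fails since h proves q and not p, so the second one is tried:

example (h : q) : p ∨ q := by
  first |
  exact Or.inl h |
  exact Or.inr h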

As another application we now review the proof by contrapositive:

In mathematics, proof by contrapositive, or proof by contraposition, is a rule of inference used in proofs, where one infers a conditional statement from its contrapositive. In other words, the conclusion "if A, then B" is inferred by constructing a proof of the claim "if not B, then not A" instead. More often than not, this approach is preferred if the contrapositive is easier to prove than the original conditional statement itself.

— https://en.wikipedia.org/wiki/Proof_by_contrapositive

The not_imp_not theorem provides a form of this argument; its "symmetric" version is also easy to obtain:

#check not_imp_not -- ¬a → ¬b ↔ b → a

theorem cpos {p q : Prop} : (p → q) ↔ (¬ q → ¬p) := by
  exact Iff.symm not_imp_not

A generalization of this argument corresponds to (p ∧ r → ¬q) ↔ (p ∧ q → ¬r) which we’ll prove now as theorem gen_cpos; we’ll prove one direction in an auxiliary theorem (or lemma) named cpaux:

lemma cpaux {p q r : Prop} (h: p ∧ r → ¬q) : p ∧ q → ¬r := by
  intro pq
  have p := pq.left
  have q := pq.right
  intro r' -- type r
  have t := And.intro p r'
  exact h t q

theorem gen_cpos {p q r : Prop} : (p ∧ r → ¬q) ↔ (p ∧ q → ¬r) := by
  have c1 : (p ∧ r → ¬q) → (p ∧ q → ¬r) := by
    intro x -- type p ∧ r → ¬q
    exact cpaux x
  have c2 : (p ∧ q → ¬r) → (p ∧ r → ¬q) := by
    intro x -- type p ∧ q → ¬r
    exact cpaux x
  exact Iff.intro c1 c2

In logical arguments it usually suffices to provide proof terms, as shown in the following equivalent version:

lemma cpaux' {p q r : Prop} (h: p ∧ r → ¬q) : p ∧ q → ¬r :=
  fun pq =>
    fun r' =>
      have t := And.intro (pq.left) r'
      h t (pq.right)

theorem gen_cpos' {p q r : Prop} : (p ∧ r → ¬q) ↔ (p ∧ q → ¬r) :=
  have c1 : (p ∧ r → ¬q) → (p ∧ q → ¬r) := fun x => cpaux' x
  have c2 : (p ∧ q → ¬r) → (p ∧ r → ¬q) := fun x => cpaux' x
  Iff.intro c1 c2

As usual, Mathlib provides a tactic for this kind of argumentation, named contrapose, which converts a goal p → q into ¬q → ¬p; the contrapose! variant additionally applies some basic transformations, like "pushing" the negations inward. The following listing uses it to provide another proof for the same theorem; observe the syntax ⟨p,q⟩ to build a conjunctive expression:

theorem gen_cpos'' {p q r : Prop} : (p ∧ r → ¬q) ↔ (p ∧ q → ¬r) := by
  constructor
  intro pr_nq ; contrapose! ; intro r p ; exact pr_nq ⟨p,r⟩
  intro pq_nr ; contrapose! ; intro q p ; exact pq_nr ⟨p,q⟩

The contrapose tactic may be applied to a hypothesis, in passing switching places with the goal. The following proof uses this pattern:

theorem gen_cpos''' {p q r : Prop} : (p ∧ r → ¬q) ↔ (p ∧ q → ¬r) := by
  constructor
  intro pr_nq
  contrapose! pr_nq -- (p ∧ q) ∧ r
  have ⟨⟨p, q⟩, r⟩ := pr_nq ; exact ⟨⟨p, r⟩, q⟩
  intro pq_nr
  contrapose! pq_nr -- (p ∧ r) ∧ q
  rcases pq_nr with ⟨⟨p, r⟩, q⟩ ; exact ⟨⟨p, q⟩, r⟩

Note the pattern matching of the transformed expressions: it is performed (just for illustration) with the have and rcases tactics.
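A third, equivalent possibility is the obtain tactic, sketched here on a generic conjunction (this snippet is only illustrative):

example (h : (p ∧ q) ∧ r) : (p ∧ r) ∧ q := by
  obtain ⟨⟨hp, hq⟩, hr⟩ := h
  exact ⟨⟨hp, hr⟩, hq⟩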

Using all these ideas we’ll prove a theorem with a "negative" statement:

If the positive naturals a, b, c, and d are such that gcd(b,d) = 1 and either a < d or c < b holds, then the equation a * b = c * d can’t hold.

As an illustration of the previously generalized contrapositive argumentation, we’ll first prove an equivalent theorem:

"If the positive naturals a, b, c, and d are such that gcd(b,d) = 1 and a * b = c * d, then it is not true that either a < d or c < b."

By de Morgan: ¬(a ∨ b) ↔ ¬a ∧ ¬b, and since by definition ¬(x < y) is the same as y ≤ x, we rewrite the statement this way:

If the positive naturals a, b, c, and d are such that gcd(b,d) = 1 and a * b = c * d, then d ≤ a and b ≤ c.

Now follows a proof of this theorem, named eqAndDes, which is based on two lemmas; these in turn rely on two previously proved theorems (bDivCd and dDivAb):

lemma eqDivC (h: a * b = c * d) (h2: gcd b d = 1) : b ∣ c := by
  have h3 : b ∣ c * d := by exact Dvd.intro_left a h
  apply bDivCd h3
  exact h2

lemma eqDivA (h: a * b = c * d) (h2: gcd b d = 1) : d ∣ a := by
  have h3 : d ∣ a * b := by exact Dvd.intro_left c h.symm
  apply dDivAb h3
  exact h2

theorem eqAndDes (h: a * b = c * d) (h1: 0 < a) (h3: 0 < c)
  (hgcd: gcd b d = 1) : d ≤ a ∧ b ≤ c := by
  constructor
  apply Nat.le_of_dvd h1
  apply eqDivA h hgcd
  apply Nat.le_of_dvd h3
  apply eqDivC h hgcd

Using this result it is easy to prove the original "negative statement". For this we rearrange the last theorem statement using the generalized contrapositive argument: the equation a * b = c * d is switched to the conclusion in negated form, and the previous conclusion returns as a hypothesis in its original (disjunctive) form:

theorem desIneq1 (h1: 0 < a) (h3: 0 < c) (hgcd: gcd b d = 1)
  (hor : a < d ∨ c < b) : a * b ≠ c * d := by
  intro h
  have hx : d ≤ a ∧ b ≤ c := eqAndDes h h1 h3 hgcd
  cases hor with
  | inl had => exact Nat.lt_le_antisymm had hx.left
  | inr hcb => exact Nat.lt_le_antisymm hcb hx.right

The inequality a * b ≠ c * d is the same as ¬(a * b = c * d), which in turn is the same as a * b = c * d → False; so the conditional is proved by introducing the hypothesis h, which "activates" the equation, allowing the previous eqAndDes theorem to be applied. This results in d ≤ a ∧ b ≤ c (assumption hx in the context.)

The example showcases a proof by cases used in order to "split" the disjunction hor: the cases tagged by inl and inr (corresponding to the Or.inl and Or.inr constructors) result in the new hypotheses had and hcb (for a < d and c < b respectively.) Both contradict the previously obtained hx by the "antisymmetry" of the ordering of the naturals:

#check Nat.lt_le_antisymm -- (h₁ : n < m) (h₂ : m ≤ n) : False

That is, both cases of the disjunction result in False, as required by the negated statement.

The next variant uses the by_contra tactic and the previous desDeMorgan1 theorem:

theorem desIneq2 (h1: 0 < a) (h3: 0 < c) (hgcd: gcd b d = 1)
  (hor : a < d ∨ c < b) : a * b ≠ c * d := by
  by_contra eq4
  have hx : d ≤ a ∧ b ≤ c := eqAndDes eq4 h1 h3 hgcd
  have := desDeMorgan1.mpr hx
  contradiction

The last two lines may be replaced by:

exact desDeMorgan1.mpr hx hor

The following is another version of the same example, but now we make explicit use of the proof by contrapositive theorem gen_cpos:

-- with gen_cpos
theorem desIneq3 (h1: 0 < a) (h3: 0 < c) (hgcd: gcd b d = 1)
  (hor : a < d ∨ c < b) : a * b ≠ c * d := by
  have imp : (gcd b d = 1) ∧ (a < d ∨ c < b) → (a * b ≠ c * d) := by
    apply gen_cpos.mp
    intro gcdAndEq4 -- (gcd b d = 1) ∧ (a * b = c * d)
    apply desDeMorgan1.mpr
    match gcdAndEq4 with
    | ⟨l, r⟩ => exact eqAndDes r h1 h3 l
  apply imp
  constructor <;> assumption

To apply gen_cpos.mp we need to have a goal in conditional form, which we construct as the imp hypothesis. After proving it (leveraging eqAndDes, the contrapositive), the original theorem is easily proved. The constructor <;> assumption pattern applies the first tactic, then the second one on each of the generated goals (the conjunction components, in this case, are among the context hypotheses.)
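The effect of <;> may be observed on a minimal goal (illustrative only):

example (hp : p) (hq : q) : p ∧ q := by
  constructor <;> assumption -- assumption runs on both generated goals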

Now we provide another application of gen_cpos which avoids the auxiliary conditional imp. In this case, we apply gen_cpos directly; as there is no suitable conditional in the context, we must explicitly provide its arguments (or at least some of them.)

But the arguments of gen_cpos are marked implicit (remember they are {p q r : Prop}), so it is an error to provide them in explicit form.

As shown before, for this situation Lean provides the @ prefix to switch the implicit arguments into explicit ones. So we provide gcd b d = 1, a * b = c * d and a < d ∨ c < b into @gen_cpos.

-- with explicit gen_cpos
theorem desIneq4 (h1: 0 < a) (h3: 0 < c) (hgcd: gcd b d = 1)
  (hor : a < d ∨ c < b) : a * b ≠ c * d := by
  apply (@gen_cpos (gcd b d = 1) (a < d ∨ c < b) (a * b = c * d)).mp
  intro gcdAndEq4
  have eq4 := gcdAndEq4.right
  have ineqs := eqAndDes eq4 h1 h3 hgcd
  rw [← desDeMorgan1] at ineqs
  exact ineqs
  exact And.intro hgcd hor

Note that in this particular case, the last two arguments to @gen_cpos may be replaced by a wildcard (underscore), and Lean is able to deduce their contents from the context.

As another example we prove that:

There is no positive natural which, added to its square, results in seven.

Note that this is a "negated" statement to prove, in which we attempt to derive False:

example (hq : 0 < q) : q + q ^ 2 ≠ 7 := by
  intro h
  have q01 : q = 1 ∨ 2 ≤ q := by exact LE.le.eq_or_gt hq
  cases q01 with
  | inl q1 =>
    rw [q1] at h
    norm_num at h
  | inr qge2 =>
    have q2or : q = 2 ∨ 3 ≤ q := by exact LE.le.eq_or_gt qge2
    cases q2or with
    | inl q2 =>
      rw [q2] at h
      norm_num at h
    | inr qge3 =>
      have q2ge9 : 9 ≤ q * q := by exact Nat.mul_le_mul qge3 qge3
      rw [← Nat.pow_two] at q2ge9
      linarith only [h, q2ge9]

The key theorem is LE.le.eq_or_gt employed to establish a disjunction from an inequality:

#check LE.le.eq_or_gt -- (h : a ≤ b) : b = a ∨ a < b

The example’s statement is satisfied by any natural, even zero, so the hq hypothesis is not needed. Now we start from the Nat.eq_zero_or_pos theorem, which states:

#check Nat.eq_zero_or_pos -- n = 0 ∨ n > 0

We’ll use this example to show a slight variant for the cases tactic which uses the case keyword and excludes the with:

example : q + q ^ 2 ≠ 7 := by
  intro h
  have q00 : q = 0 ∨ 0 < q := by exact Nat.eq_zero_or_pos q
  cases q00
  case inl h0 => rw [h0] at h ; norm_num at h
  case inr hq =>
    have q01 : q = 1 ∨ 2 ≤ q := by exact LE.le.eq_or_gt hq
    cases q01
    case inl q1 => rw [q1] at h ; norm_num at h
    case inr qge2 =>
      have q2or : q = 2 ∨ 3 ≤ q := by exact LE.le.eq_or_gt qge2
      cases q2or
      case inl q2 => rw [q2] at h ; norm_num at h
      case inr qge3 =>
        have q2ge9 : 9 ≤ q * q := by exact Nat.mul_le_mul qge3 qge3
        rw [← Nat.pow_two] at q2ge9
        linarith only [h, q2ge9]

When there is no need for the case variables, it is possible to omit the case …​ statements (see for example https://lean-lang.org/theorem_proving_in_lean4/tactics.html for more details.) In our opinion, the existence of these syntactic variants is confusing, since they arguably provide neither substantial ergonomic gains nor substantial comparative functionality.
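For instance, the case labels may be dropped altogether and the goals handled positionally with bullets (a sketch equivalent to using case inl/inr):

example (h : p ∨ q) : q ∨ p := by
  cases h
  · right ; assumption -- the inl goal
  · left ; assumption  -- the inr goal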

The match tactic is a variant which also allows the evaluation of several cases. In this example it may provide a more straightforward proof:

-- Provided by Kevin Buzzard
example : q + q ^ 2 ≠ 7 := by
  match q with
  | 0 => norm_num
  | 1 => norm_num
  | 2 => norm_num
  | (n + 3) =>
    ring_nf -- 12 + n * 7 + n ^ 2 ≠ 7
    apply ne_of_gt -- 7 < 12 + n * 7 + n ^ 2
    rw [Nat.add_assoc] -- 7 < 12 + (n * 7 + n ^ 2)
    apply Nat.lt_add_right -- 7 < 12
    linarith

The following proof introduces the by_cases tactic: it creates a new hypothesis and splits the proof into two branches, one where the hypothesis holds and one where it does not:

example (q : ℕ) : q + q ^ 2 ≠ 7 := by
  by_cases hq0 : q = 0 ; rw [hq0] ; norm_num
  by_cases hq1 : q = 1 ; rw [hq1] ; norm_num
  by_cases hq2 : q = 2 ; rw [hq2] ; norm_num
  have h₁ : 3 ≤ q := Nat.two_lt_of_ne hq0 hq1 hq2
  have h₂ : 9 ≤ q ^ 2 := by
    show 3 ^ 2 ≤ q ^ 2
    apply Nat.pow_le_pow_of_le_left ; assumption
  linarith only [h₁, h₂]

Here we introduce the cases q = 0 and q ≠ 0; the former is easy to prove by rewriting; the latter is forked into the two subcases q = 1 and q ≠ 1; again, the former is proved the same way, and the remaining case is (at last) forked into the cases q = 2 and q ≠ 2.

The surviving case corresponds to q ≠ 0, q ≠ 1 and q ≠ 2, which allows the direct application of the theorem

#check Nat.two_lt_of_ne -- {n : ℕ} (h1 : n ≠ 0) (h2 : n ≠ 1) (h3 : n ≠ 2) : 2 < n

to conclude 3 ≤ q. From this we may follow the previous solutions, but to show another way we prove that 9 ≤ q ^ 2 with:

#check Nat.pow_le_pow_of_le_left -- {n m : ℕ} (h : n ≤ m) (i : ℕ) : n ^ i ≤ m ^ i

In order to help Lean discover our intentions, we hint with show that the real proof is for 3 ^ 2 ≤ q ^ 2.

As a slight variation, the following proof uses the same name h for all the cases, so they are not directly addressable. It employs the common pattern apply theorem <;> assumption, which applies the context assumptions to the theorem's goals as suitable:

example (q : ℕ) : q + q ^ 2 ≠ 7 := by
  by_cases h : q = 0 ; rw [h] ; norm_num
  by_cases h : q = 1 ; rw [h] ; norm_num
  by_cases h : q = 2 ; rw [h] ; norm_num
  have h₁ : 3 ≤ q := by apply Nat.two_lt_of_ne <;> assumption
  have h₂ : 9 ≤ q ^ 2 := by
    rw [Nat.pow_two]
    show 3 * 3 ≤ q * q ; apply Nat.mul_le_mul <;> assumption
    -- same as: apply Nat.mul_le_mul h₁ h₁
  linarith only [h₁, h₂]

For the second inequality we now rewrite with Nat.pow_two to get 3 * 3 ≤ q * q and apply Nat.mul_le_mul as before.

More cases

Now for another "trivial" example:

For naturals a and b, if 2 * a = 2 * b, then a = b.

The following proof is built from the trichotomy law: for naturals a and b it is true that a < b ∨ a = b ∨ a > b.

section
variable {a b c: ℕ}

theorem dobCancel0 (h: 2 * a = 2 * b): a = b := by
  have t : a < b ∨ a = b ∨ a > b := by exact Nat.lt_trichotomy a b
  cases t with
  | inl l =>
    have dob : 2 * a < 2 * b := by
      apply mul_lt_mul_of_pos_left l
      norm_num
    rw [h] at dob
    by_contra
    exact LT.lt.false dob
  | inr r =>
    cases r with
    | inl eq => exact eq
    | inr r =>
      have dob : 2 * b < 2 * a := by
        apply mul_lt_mul_of_pos_left r
        norm_num
      rw [h] at dob
      by_contra
      exact LT.lt.false dob

end

The "triple or" proposition is really two nested disjunctions, a < b ∨ (a = b ∨ a > b), which are processed by cases as seen before. Only the "equality" case may be proved directly (by exact eq); we look for a contradiction in the other ones (observe the LT.lt.false theorem.)
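Its statement may be inspected as usual (the comment is a paraphrase):

#check @LT.lt.false -- roughly: a < a → False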

The following example illustrates the rcases tactic, with a purpose similar to cases but a bit more powerful, since it may "recursively" extract the corresponding cases from a compound expression. Here the three internal propositions are extracted as l, eq and r:

theorem dobCancel1 (h: 2 * a = 2 * b): a = b := by
  have t : a < b ∨ a = b ∨ a > b := by exact Nat.lt_trichotomy a b
  rcases t with l | eq | r
  have dob : 2 * a < 2 * b := by
    apply mul_lt_mul_of_pos_left l (by norm_num)
  rw [h] at dob
  by_contra
  exact LT.lt.false dob
  exact eq
  have dob : 2 * b < 2 * a := by
    apply mul_lt_mul_of_pos_left r (by norm_num)
  rw [h] at dob
  by_contra
  exact LT.lt.false dob

It also leverages the abbreviation pattern (by norm_num), avoiding two non-substantial lines.

An interesting issue arises when asking Lean to generate a proposal for proving this example.

example (h: 2 * a = 2 * b): a = b := by apply?

At least in the current version of Lean we don’t get a useful suggestion. But since this is a very simple statement, we may suspect that there is some kind of cancellation law for the expression x * a = x * b which only works when x ≠ 0; so we improve our test example to:

example (h: c * a = c * b) (h': 0 < c): a = b := by apply?

Now Lean provides the required help:

Try this: exact Nat.eq_of_mul_eq_mul_left h' h
Note
The hypothesis with the pattern 0 < n is common for theorems in the natural numbers.

Armed with this knowledge, a simpler proof arises:

theorem dobCancel2 (h: 2 * a = 2 * b) : a = b := by
  exact Nat.eq_of_mul_eq_mul_left (by norm_num) h

By the way, this example only involves linear expressions with constant factors, so it may be solved by linarith:

theorem dobCancel3 (h: 2 * a = 2 * b) : a = b := by linarith
Note
To obtain the Lean version in use, just type:
#eval Lean.versionString -- "4.3.0-rc1"
Another inequality

As another example with cases, let’s prove an "obvious" fact:

If two positive naturals P and q are such that P > q, then P^2 + q^2 > 4.

A formal proof considers that the positive P can be one, two, or "three or more". The first is not possible since q < P would imply that q is not positive. For the second case we have just P = 2 and q = 1, so P^2 + q^2 = 5 > 4. And in the last case, P^2 ≥ 9, so P^2 + q^2 ≥ 9 > 4.

example (hp : 0 < P) (hq : 0 < q) (hpq : q < P) : P ^ 2 + q ^ 2 > 4 := by
  have p1OrGe2 : P = 1 ∨ 2 ≤ P := by exact LE.le.eq_or_gt hp
  cases p1OrGe2 with
  | inl p1 => -- P = 1
    linarith only [p1, hpq, hq]
  | inr pge2 => -- 2 ≤ P
    have p2OrGe3 : P = 2 ∨ 3 ≤ P := by exact LE.le.eq_or_gt pge2
    cases p2OrGe3 with
    | inl p2 => -- P = 2
      have p2' : P ^ 2 = 4 := by rw [Nat.pow_two, Nat.mul_self_inj.mpr p2]
      have q1 : q = 1 := by linarith only [p2, hpq, hq]
      have q1' : q ^ 2 = 1 := by rw [Nat.pow_two, Nat.mul_self_inj.mpr q1]
      linarith only [q1', p2']
    | inr pge3 => -- 3 ≤ P
      have p2ge9 : 9 ≤ P * P := by exact Nat.mul_le_mul pge3 pge3
      rw [← Nat.pow_two] at p2ge9
      have q2ge0 : 0 ≤ q ^ 2 := by exact Nat.zero_le (q ^ 2)
      linarith only [p2ge9, q2ge0]

And now with match to derive simpler cases:

example (hp : 0 < P) (hq : 0 < q) (hpq : q < P) : P ^ 2 + q ^ 2 > 4 := by
  match q with
  | 0 => contradiction -- with hq: 0 < 0
  | (m + 1) =>
    match P with
    | 0 => contradiction -- with hp: 0 < 0
    | 1 => contradiction -- with hpq: m + 1 < 1
    | (n + 2) =>
      ring_nf -- 5 + n * 4 + n ^ 2 + m * 2 + m ^ 2
      simp [Nat.add_assoc] -- 5 + (...)
      apply Nat.lt_add_right -- 4 < 5
      norm_num

Note the usage of simp [Nat.add_assoc], which applies the provided theorem as many times as needed to simplify the expression. It is equivalent to:

rw [Nat.add_assoc, Nat.add_assoc, Nat.add_assoc]
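
As a minimal, self-contained sketch of this behavior: simp applies Nat.add_assoc repeatedly until the sum is fully right-associated, at which point both sides coincide:

example (a b c d : ℕ) : a + b + c + d = a + (b + (c + d)) := by
  simp [Nat.add_assoc]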

The following two variants build the required inequalities to apply linarith:

example (hp : 0 < P) (hq : 0 < q) (hpq : q < P) : P ^ 2 + q ^ 2 > 4 := by
  have hp2 : q ^ 2 ≥ 1 := by
    show 1 ^ 2 ≤ q ^ 2
    apply Nat.pow_le_pow_of_le_left
    exact hq
  have hq2 : P ^ 2 ≥ 4 := by
    show 2 ^ 2 ≤ P ^ 2
    apply Nat.pow_le_pow_of_le_left
    show 1 < P
    exact Nat.lt_of_le_of_lt hq hpq
  linarith only [hp2, hq2]

Next, observe the use of the replace tactic, which is similar to have but replaces the hypothesis, so the context remains less cluttered:

example (hp : 0 < P) (hq : 0 < q) (hpq : q < P) : P ^ 2 + q ^ 2 > 4 := by
  replace hq : q ≥ 1 := hq
  replace hpq : P ≥ 1 + q := by exact Nat.one_add_le_iff.mpr hpq
  replace hp : P ≥ 2 := by linarith only [hq, hpq]
  clear hpq -- no longer needed
  replace hq : q ^ 2 ≥ 1 := by exact Nat.one_le_pow 2 q hq
  replace hp : P ^ 2 ≥ 4 := by
    show P ^ 2 ≥ 2 ^ 2
    apply Nat.pow_le_pow_of_le_left
    exact hp
  linarith only [hp, hq]

The following variant illustrates the power of the change tactic, which allows replacing the goal or a hypothesis by a definitionally equal expression.
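
As a minimal, self-contained sketch: for the naturals, 0 < n + 1 unfolds to 1 ≤ n + 1, so change may swap one for the other:

example (n : ℕ) : 0 < n + 1 := by
  change 1 ≤ n + 1 -- definitionally equal to the goal
  exact Nat.le_add_left 1 n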

Here it is used to guide Lean in the application of Nat.add_le_add:

example (hp : 0 < P) (hq : 0 < q) (hpq : q < P) : P ^ 2 + q ^ 2 > 4 := by
  change P ^ 2 + q ^ 2 ≥ 4 + 1
  apply Nat.add_le_add -- two goals: 4 ≤ P ^ 2 and 1 ≤ q ^ 2
  change P ^ 2 ≥ 2 ^ 2
  apply Nat.pow_le_pow_of_le_left -- remains 2 ≤ P
  change P > 1
  apply Nat.lt_of_le_of_lt _ hpq -- transitivity: remains 1 ≤ q
  exact hq
  -- remains second goal: 1 ≤ q ^ 2
  exact Nat.one_le_pow 2 q hq

Induction

The following is the recursive definition of a function which adds the first natural numbers:

def sumNat : ℕ → ℕ
  | 0 => 0
  | n + 1 => n + 1 + sumNat n

#eval sumNat 100 -- 5050

Now we want to prove the well known theorem:

sumNat a = 0 + 1 + … + a = a * (a + 1) / 2

Lean provides several facilities to build proofs by induction. Mathlib provides the induction' tactic, which takes the variable (a natural number) of the expression to prove, plus (after with) a name for the new variable and a name for the induction hypothesis bound to it (n and ih in the following example):

example : sumNat a = a * (a + 1) / 2 := by
  induction' a with n ih
  -- case zero is easy:
  -- |- sumNat Nat.zero = Nat.zero * (Nat.zero + 1) / 2
  . rw [sumNat]
    norm_num
  -- case succ
  -- |- sumNat (Nat.succ n) = Nat.succ n * (Nat.succ n + 1) / 2
  have hdef : Nat.succ n = n + 1 := by rfl
  rw [sumNat, ih, hdef]
  -- put free (n+1) as fraction
  have h : 2 * (n + 1) / 2 = n + 1  := by
    apply Nat.mul_div_cancel_left (n+1) (by norm_num)
  nth_rw 1 [h] -- only the free one
  -- adding fractions is not trivial in the naturals
  -- we are doing: a / 2 + b / 2 = (a + b) / 2
  -- provided that 2 divides "a"
  rw [Nat.add_div_of_dvd_right (by exact Nat.dvd_mul_right 2 (n + 1))]
  ring_nf

The function in the statement (here sumNat) must be recursively defined, and the cases n = 0 and P(n) → P(n + 1) are set up as the theorem goals.

The second case consists in proving sumNat (Nat.succ n) = Nat.succ n * (Nat.succ n + 1) / 2 with the help of the induction hypothesis sumNat n = n * (n + 1) / 2. There are some contrived manipulations caused by the fractional expressions over the naturals (since the division may not be exact), but they are not especially interesting.
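
For instance, natural division truncates, so splitting a fraction without a divisibility condition can fail; a small check (decide evaluates both sides):

#eval (1 / 2 : ℕ) -- 0
example : 1 / 2 + 1 / 2 ≠ (1 + 1) / 2 := by decide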

The next variation of the same example shows the conv environment, which provides a set of commands for "navigating" the goal subexpressions in order to rewrite them. Here we simplify the right-hand side of the goal equality (command rhs):

example : sumNat a = a * (a + 1) / 2 := by
  induction' a with n ih
  -- case zero is easy:
  -- |- sumNat Nat.zero = Nat.zero * (Nat.zero + 1) / 2
  rw [sumNat] -- 0 = Nat.zero * (Nat.zero + 1) / 2
  norm_num
  -- case succ
  -- |- sumNat (Nat.succ n) = Nat.succ n * (Nat.succ n + 1) / 2
  rw [sumNat, Nat.mul_add, mul_one, Nat.succ_mul, Nat.mul_succ]
  have h : 2 ∣ 2 * Nat.succ n := by apply Nat.dvd_mul_right
  conv =>
    rhs
    rw [add_assoc, Nat.two_mul, Nat.add_div_of_dvd_left h,
      Nat.mul_div_cancel_left _ (by norm_num), add_comm]
  have h3 : Nat.succ n = n + 1 := rfl
  rw [ih, h3]
  ring_nf

Note that inside the conv environment the set of available tactics is restricted to rewritings and theorem applications. Also, since the user is free to manipulate the subexpressions at will, the conv environment must be explicitly terminated by breaking the indentation.

Note
See more information about the conv mode in https://leanprover-community.github.io/mathlib4_docs/docs/Conv/Guide.html

A final "improvement" follows with two invocations to conv:

example : sumNat a = a * (a + 1) / 2 := by
  induction' a with n ih
  -- case zero is easy:
  -- |- sumNat Nat.zero = Nat.zero * (Nat.zero + 1) / 2
  rw [sumNat] -- 0 = Nat.zero * (Nat.zero + 1) / 2
  norm_num
  -- case succ
  -- |- sumNat (Nat.succ n) = Nat.succ n * (Nat.succ n + 1) / 2
  rw [sumNat, Nat.mul_add, mul_one, Nat.succ_mul, Nat.mul_succ]
  have h : 2 ∣ 2 * Nat.succ n := by apply Nat.dvd_mul_right
  have h' : 0 < 2 := by norm_num
  conv =>
    rhs
    rw [add_assoc, Nat.two_mul, Nat.add_div_of_dvd_left,
      Nat.mul_div_cancel_left, add_comm]
      -- rhs is Nat.succ n + (n * n + n) / 2
    . skip
    . exact h'
    . exact h
  -- lhs is n + 1 + sumNat n
  conv =>
    lhs
    rw [ih]
    -- lhs is n + 1 + n * (n + 1) / 2

In the first conv block the calls to Nat.add_div_of_dvd_left and Nat.mul_div_cancel_left no longer receive their required arguments; in this case Lean simply creates new goals, to be closed after the goal transformation. But this poses a problem: the conv block cannot be terminated without closing all the new subgoals, yet those subgoals cannot be proved immediately, since any command would be interpreted as another transformation request on the current subexpression. So a new way to signal termination is needed before starting the proofs of the subgoals: this is the purpose of the skip command, which is also applied with extra indentation.

The second conv block just applies the induction hypothesis; then both sides of the goal are definitionally equal, so the proof ends.

For any natural a: the value of 3 ^ a - 1 is even.

A straightforward proof:

example : 2 ∣ (3 ^ a - 1) := by
  induction' a with n ih
  norm_num
  have pg1 : 3 ^ n ≥ 1 := by exact Nat.one_le_pow' n 2
  have h : 3 ^ n * (2 + 1) - 1 = 3 ^ n * 2 + 3 ^ n - 1 := by
    rw [mul_add, mul_one]
  show 2 ∣ 3 ^ n * (2 + 1) - 1
  rw [h, Nat.add_sub_assoc pg1] -- 2 ∣ 3 ^ n * 2 + (3 ^ n - 1)
  apply Nat.dvd_add
  apply Nat.dvd_mul_left
  apply ih

The next example is a bit more interesting, since it involves a proof which does not start at the "zero case":

For any positive natural a, the inequality 3 ^ a > a ^ 2 holds.

The following proof is based on match. Note there is no need to include the case 0, since it is excluded by the hypothesis h.

In this case the induction method is not able to close the proof starting from a = 1, so this case is proved apart (simply by norm_num), and the induction actually starts at a = 2; since a = n + 2, the induction formally begins from n = 0.

As usual for induction with inequalities, the goal is closed by transitivity:

example (h: a ≥ 1) : 3 ^ a > a ^ 2 := by
  match a with
    | 1 => norm_num
    | a + 2 =>
      induction' a with n ih
      norm_num
      -- need '<' to use mul_lt_mul_left
      replace ih : (n + 2) ^ 2 < 3 ^ (n + 2) := ih (by linarith)
      conv at ih =>
        tactic => have h3 : 0 < 3 := by norm_num
        rw [mul_lt_mul_left h3] -- multiply inequality by 3
        ring_nf  -- 12 + n * 12 + n ^ 2 * 3 < 3 ^ n * 27
      change (n + 3) ^ 2 < 3 ^ (n + 2) * 3 -- drop Nat.succ
      ring_nf -- 9 + n * 6 + n ^ 2 < 3 ^ n * 27
      have : 0 ≤ n ^ 2 := by exact Nat.zero_le (n ^ 2) -- for linarith below
      have hm : 9 + n * 6 + n ^ 2 < 12 + n * 12 + n ^ 2 * 3 := by linarith
      exact lt_trans hm ih

This example also showed the use of the tactic => … pattern to (temporarily) enter tactic mode.

Now, another inequality to prove by induction:

For any natural a ≥ 4, the inequality a! > 2 ^ a holds.

The following proof builds on the previous one. It uses the Nat.factorial recursive function, which defines Nat.factorial 0 = 1 and Nat.factorial (n + 1) = (n + 1) * Nat.factorial n:
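
A quick sanity check of this definition (evaluations shown as comments):

#eval Nat.factorial 0 -- 1
#eval Nat.factorial 5 -- 120
example : Nat.factorial 5 = 120 := by rfl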

example (h: a ≥ 4) : Nat.factorial a > 2 ^ a := by
  match a with
  | a + 4 =>
    clear h -- this hypothesis is already implied by (a + 4)
    induction' a with n ih
    norm_num
    -- drop Nat.succ
    change 2 ^ (n + 4) < Nat.factorial (n + 4) at ih
    -- multiply both sides of ih by (n + 5)
    have : 0 < (n + 5) := by norm_num
    replace ih := (mul_lt_mul_left ‹0 < (n + 5)›).mpr ih
    -- get aux inequality for transitivity:
    -- 2 * 2 ^ (n + 4) < (n + 5) * 2 ^ (n + 4)
    have : 2 < n + 5 := by linarith only [n]
    have : 0 < 2 ^ (n + 4) := by norm_num
    have haux := (mul_lt_mul_right ‹0 < 2 ^ (n + 4)›).mpr ‹2 < n + 5›
    -- prepare goal
    have : 2 ^ (n + 5) = 2 * 2 ^ (n + 4) := by exact Nat.pow_succ'
    rw [this]
    -- finish
    exact lt_trans haux ih

This example also showed the usage of explicit terms for context hypotheses using the French quotes ‹…› (obtained with the combinations \f< and \f>). They allow inserting any (proved) hypothesis by writing its type (rather than a name, as usual.)
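
As a minimal sketch of the notation on its own, the following uses ‹…› twice to fetch the hypothesis h by its type:

example (n : ℕ) (h : 0 < n) : 0 < n * n := Nat.mul_pos ‹0 < n› ‹0 < n›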

The following version introduces an explicit change of variables with the generalize tactic, with b standing for a - 4; the goal is first rewritten, using the fact a = (a - 4) + 4, to allow Lean to perform such a replacement.

It also illustrates the usage of conv mode for the goal and the inductive hypothesis.

example (h: a ≥ 4) : Nat.factorial a > 2 ^ a := by
  have torw : a = a - 4 + 4 := by exact Nat.eq_add_of_sub_eq h rfl
  rw [torw]
  generalize a - 4 = b
  induction' b with n ih
  norm_num
  conv at ih =>
    tactic => have : 0 < n + 5 := by norm_num
    change 2 ^ (n + 4) < Nat.factorial (n + 4)
    rw [mul_lt_mul_left ‹0 < n + 5›]
  have hq : 2 < n + 5 := by linarith
  conv at hq =>
    tactic => have : 0 < 2 ^ (n + 4) := by norm_num
    rw [mul_lt_mul_right ‹0 < 2 ^ (n + 4)›,
      Nat.pow_succ']
  exact lt_trans hq ih

The next version makes the change of variable directly in the induction' statement. It also illustrates the suffices tactic, used when there is a hypothesis which is enough to close the goal, but whose proof is deferred "for later". Here the goal is closed by transitivity with the help of the hm hypothesis; the latter is a new goal whose proof is provided after the main goal is closed.
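
A minimal, self-contained sketch of suffices: the main goal is closed first from hm, and hm itself is proved afterwards:

example (a b : ℕ) (h : b = a + 1) : a < b + 1 := by
  suffices hm : a < b
  exact Nat.lt_succ_of_lt hm
  -- now the deferred goal: a < b
  rw [h]
  exact Nat.lt_succ_self a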

example (h: a ≥ 4) : Nat.factorial a > 2 ^ a := by
  have torw : a = a - 4 + 4 := by exact Nat.eq_add_of_sub_eq h rfl
  rw [torw]
  change Nat.factorial (a - 4 + 4) > 2 ^ (a - 4 + 4)
  induction' (a - 4) with n ih
  norm_num
  change 2 ^ (n + 4) < Nat.factorial (n + 4) at ih
  have hn5 : 0 < n + 5 := by norm_num
  rw [mul_lt_mul_left hn5] at ih -- (n + 5) * 2 ^ (n + 4) < (n + 5) * (n + 4)!
  change 2 ^ (n + 5) < (n + 5) * Nat.factorial (n + 4)
  suffices hm : 2 ^ (n + 5) < (n + 5) * 2 ^ (n + 4)
  exact lt_trans hm ih
  -- done, now the remaining goal
  have : 2 ^ (n + 5) = 2 * 2 ^ (n + 4) := by exact Nat.pow_succ'
  rw [this]
  have : 0 < 2 ^ (n + 4) := by norm_num
  apply (mul_lt_mul_right this).mpr
  -- remaining target is: 2 < n + 5
  linarith only [n]