Digital Miracle
Digital Miracle is yet another theorem-proving esoteric programming language invented by User:Hakerh400. This axiomatic system is a mixture of combinatory lambda calculus and inductive Peano arithmetic. Natural numbers are represented as Church numerals.
Overview
There is a fixed set of axioms. Each axiom can be understood as a built-in black-box function. The type of the function corresponds to the statement that the axiom asserts.
A theorem is a function. It has a type signature and an implementation. The type signature is the proposition that the theorem aims to prove. The implementation is a combination of axioms and/or other theorems defined so far.
A function from type A to type B corresponds to an implication from proposition A to proposition B. Function application corresponds to Modus Ponens.
Combinators
A combinator represents a value in combinatory logic. It can be understood as a function that takes an input and produces an output. Combinators are not theorems: theorems are meta-functions that operate on propositions, while combinators are just type-level values that theorems can analyze.
There are three combinator constructors:
| Name | Description |
| --- | --- |
| K | The K combinator from the SKI calculus |
| S | The S combinator from the SKI calculus |
| Call | Takes two combinators as arguments and produces a combinator that represents their application |
The Call constructor can also be used in infix notation as the (.) operator. For example, if we want to call S with argument K, we can write either Call S K, or S . K. We cannot just write S K.
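As an illustration only (not part of the language's implementation), the three constructors can be modeled as a small Haskell datatype, with (.) redefined as an infix alias for Call:

```haskell
import Prelude hiding ((.))

-- Hypothetical Haskell encoding of the three combinator constructors.
data Comb = K | S | Call Comb Comb
  deriving (Eq, Show)

-- The infix (.) notation from the article, as an alias for Call.
(.) :: Comb -> Comb -> Comb
(.) = Call
infixl 9 .

main :: IO ()
main = print (S . K)  -- the same term as Call S K
</imports_placeholder>
```

Note that (.) is declared left-associative, so S . K . K parses as Call (Call S K) K.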
Aliases
We introduce some additional combinators and their aliases:
| Name | Value | Description |
| --- | --- | --- |
| I | S . K . K | The identity function (Haskell equivalent: id) |
| D | S . (K . S) . K | The composition operator (Haskell equivalent: (.)) |
| F | S . (D . D . S) . (K . K) | Flip arguments (Haskell equivalent: flip) |
| Zero | K . I | Church numeral 0 |
| Suc | S . D | Successor of a Church numeral |
Natural numbers
Natural numbers are represented as Church numerals. We have three axioms:
| Axiom name | Statement | Description |
| --- | --- | --- |
| NatZero | Nat Zero | Zero is a natural number |
| NatSuc | forall n. Nat n -> Nat (Suc . n) | The successor of a natural number is also a natural number |
| NatTran | forall n m. n == m -> Nat n -> Nat m | If two values are equal and the first one is a natural number, then the second one is also a natural number |
Note: the dot after the forall arguments is not the Call operator, but just a separator between the arguments and the statement. For example:

```
forall n. Nat n -> Nat (Suc . n)
        ^                   ^
    Separator              Call
```
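For intuition, Church numerals can be sketched directly in plain Haskell, where zero ignores the function and the successor applies it one more time (this illustrates the encoding only and is not part of the language):

```haskell
-- Sketch of Church numerals: the numeral n is the function that
-- applies its first argument n times to its second argument.
zero :: (a -> a) -> a -> a
zero _ x = x

suc :: ((a -> a) -> a -> a) -> (a -> a) -> a -> a
suc n f x = f (n f x)

-- Convert a Church numeral back to an ordinary Int for inspection.
toInt :: ((Int -> Int) -> Int -> Int) -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (suc (suc (suc zero))))  -- prints 3
```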
Equality
Finally, we have axioms regarding equality:
| Axiom name | Statement | Description |
| --- | --- | --- |
| Refl | forall a. a == a | Equality is reflexive |
| Com | forall a b. a == b -> b == a | Equality is commutative |
| Tran | forall a b c. a == b -> b == c -> a == c | Equality is transitive |
| Cong | forall f g a b. f == g -> a == b -> f . a == g . b | Function and argument congruence |
| Ext | forall f g. (forall a. f . a == g . a) -> f == g | Extensionality |
| KDef | forall a b. K . a . b == a | Definition of the K combinator |
| SDef | forall a b c. S . a . b . c == a . c . (b . c) | Definition of the S combinator |
| Cont | forall a b. K == S -> a == b | Contradiction elimination: if we manage to prove that K and S are the same, we can prove any equality |
| Induct | forall f r n. f . Zero == r -> (forall m. f . m == r -> f . (Suc . m) == r) -> Nat n -> f . n == r | Induction over natural numbers |
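For readers more familiar with mainstream proof assistants, the equality axioms (except the combinator-specific ones) correspond to standard lemmas about propositional equality. The following Lean 4 sketch shows the analogy informally; all names used are Lean built-ins, and the correspondence is illustrative only:

```lean
-- Informal Lean 4 analogues of the equality axioms.
example {α : Type} (a : α) : a = a := rfl                         -- Refl
example {α : Type} {a b : α} (h : a = b) : b = a := h.symm        -- Com
example {α : Type} {a b c : α} (h₁ : a = b) (h₂ : b = c) : a = c :=
  h₁.trans h₂                                                     -- Tran
example {α β : Type} {f g : α → β} {a b : α}
    (h₁ : f = g) (h₂ : a = b) : f a = g b := congr h₁ h₂          -- Cong
example {α β : Type} {f g : α → β}
    (h : ∀ a, f a = g a) : f = g := funext h                      -- Ext
```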
Syntax
Each theorem has a name, a type signature (the proposition) and an implementation (the proof). Recursion is strictly forbidden: a theorem cannot reference itself, nor any theorem that is defined after it.
Examples
First, let's prove that c == b follows from a == b and a == c. It is similar to transitivity of equality, but some things are flipped. We can write a theorem for it:

```
tr1 :: forall a b c. a == b -> a == c -> c == b
tr1 ab ac = Tran (Com ac) ab
```
Explanation: The theorem takes two arguments. The first argument ab is a proof that a == b and the second argument ac is a proof that a == c. When we apply axiom Com to ac, we get a proof that c == a. Then we apply axiom Tran to c == a and a == b to obtain c == b. ∎
We can also write the following theorem:
```
tr2 :: forall a b c. a == b -> b == c -> a == c
tr2 = Tran
```
It is equivalent to axiom Tran. Now, we can use these two theorems in infix notation. For example, tr2 P Q can be written as P `tr2` Q. They are left-associative when used in infix notation. Theorem tr1 modifies the left-hand side, while theorem tr2 modifies the right-hand side. We can use them to sequentially construct the statement we need based on the set of proofs we have so far.
Next, we can prove argument congruence and functional congruence:
```
acong :: forall f a b. a == b -> f . a == f . b
acong = Cong Refl
```
Explanation: Theorem acong states that if two combinators a and b are equal, then calling any combinator f with a and with b gives equal results. We proved it by applying axiom Cong to axiom Refl: Cong needs a proof that f == f, and Refl provides it. The type unifier does the magic.
And for functions:
```
fcong :: forall f g a. f == g -> f . a == g . a
fcong fg = Cong fg Refl

fcong2 :: forall f g a b. f == g -> f . a . b == g . a . b
fcong2 fg = fcong (fcong fg)
```
Now, something more interesting. Let's prove that I . a == a. That is, the identity combinator returns the argument unchanged. Here is the theorem:
```
id :: forall a. I . a == a
id = Refl `tr2` SDef `tr2` KDef
```
Explanation: We start from Refl. Then we apply axiom SDef to the right-hand side, and then we apply axiom KDef to the right-hand side. How does it actually work? First, note that I is just an alias for S . K . K. We start from Refl, so we have S . K . K . a == S . K . K . a. Then we apply SDef to the RHS (right-hand side), obtaining S . K . K . a == K . a . (K . a). Finally we apply KDef to the RHS, obtaining S . K . K . a == a. Since I is S . K . K, this is equivalent to I . a == a. ∎
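Although the language checks proofs rather than running programs, the two definitional axioms can also be read as rewrite rules. The following Haskell sketch (a hypothetical evaluator; the V constructor stands for a free variable and is not part of the language) replays the derivation above as actual reduction steps:

```haskell
import Control.Applicative ((<|>))

-- Combinator terms, with V for free variables (illustration only).
data Comb = K | S | V String | Comb :. Comb
  deriving (Eq, Show)
infixl 9 :.

-- One leftmost-outermost rewrite step by KDef or SDef, if any applies.
step :: Comb -> Maybe Comb
step (K :. a :. _)      = Just a                     -- KDef
step (S :. a :. b :. c) = Just (a :. c :. (b :. c)) -- SDef
step (f :. a) = ((:. a) <$> step f) <|> ((f :.) <$> step a)
step _        = Nothing

-- All intermediate terms, starting from the input.
trace :: Comb -> [Comb]
trace t = t : maybe [] trace (step t)

main :: IO ()
main = mapM_ print (trace (S :. K :. K :. V "a"))
```

Running it prints the three terms from the proof: S . K . K . a, then K . a . (K . a), then a.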
The next thing we want to prove is related to the D combinator. We want to prove that D . a . b . c == a . (b . c). Here is the theorem:

```
dot :: forall a b c. D . a . b . c == a . (b . c)
dot =
  let dot_a = Refl `tr2` SDef `tr2` fcong KDef
                :: D . a == S . (K . a)
  in fcong2 dot_a `tr2` SDef `tr2` fcong KDef
```
Explanation: We use a let statement to make the proof more readable. We first prove dot_a, which states that D . a == S . (K . a). Then we use dot_a to prove the final statement. ∎
Next, the F combinator: F . a . b . c == a . c . b. Theorem:

```
flip :: forall a b c. F . a . b . c == a . c . b
flip =
  let flip_a  = Refl `tr2` SDef `tr2` fcong dot `tr2` acong KDef
                  :: F . a == D . (S . a) . K
      flip_ab = fcong flip_a `tr2` dot
                  :: F . a . b == S . a . (K . b)
  in fcong flip_ab `tr2` SDef `tr2` acong KDef
```
Explanation: Pretty straightforward, isn't it?
And one more thing at the end. Let's define a new combinator alias:
```
type FI = F . I
```
and prove the following theorem:
```
flip_id :: forall a b. FI . a . b == b . a
flip_id = Refl `tr2` flip `tr2` fcong id
```
We can define any alias we want. However, like theorems, aliases cannot be recursive.
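As a sanity check, the rewrite-rule reading of KDef and SDef can confirm flip_id operationally. This Haskell sketch (hypothetical, with a V constructor standing for free variables) builds the aliases from the article and normalizes F . I . a . b, arriving at b . a:

```haskell
import Control.Applicative ((<|>))

-- Combinator terms, with V for free variables (illustration only).
data Comb = K | S | V String | Comb :. Comb
  deriving (Eq, Show)
infixl 9 :.

-- One leftmost-outermost rewrite step by KDef or SDef, if any applies.
step :: Comb -> Maybe Comb
step (K :. a :. _)      = Just a                     -- KDef
step (S :. a :. b :. c) = Just (a :. c :. (b :. c)) -- SDef
step (f :. a) = ((:. a) <$> step f) <|> ((f :.) <$> step a)
step _        = Nothing

-- Rewrite until no rule applies.
normalize :: Comb -> Comb
normalize t = maybe t normalize (step t)

-- Aliases from the article.
i, d, f, fi :: Comb
i  = S :. K :. K                      -- I
d  = S :. (K :. S) :. K               -- D
f  = S :. (d :. d :. S) :. (K :. K)   -- F
fi = f :. i                           -- FI

main :: IO ()
main = print (normalize (fi :. V "a" :. V "b"))  -- V "b" :. V "a"
```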
Further research
This axiomatic system is probably as strong as first-order Peano arithmetic. It cannot reason about uncomputable functions, and since the set of all computable functions is countable, it cannot represent real numbers or transfinite cardinals. It is weaker than Zermelo-Fraenkel set theory, and probably even weaker than higher-order logic.
Regarding consistency, the system appears to be consistent, unless some silly mistake has been made in the axiomatization.
There are combinators about which we can prove almost nothing, such as S . I . I . (S . I . I). However, we can construct inductive datatypes using the induction schema for natural numbers (to be precise, that is actually not an axiom schema, but just a single axiom, because it does not involve a meta-level predicate parameter). Once we construct datatypes, we can write theorems that prove certain statements as long as some combinators belong to a datatype. We can explicitly pass around proofs of datatype inhabitation. This would be more verbose than in typed higher-order logic (where types are implicit), but it would allow more freedom regarding combinator applications.
Implementation
Implementation in Haskell. It links to a specific commit to make the link permanent. Check out the master branch for more recent updates, if there are any.