Digital Miracle is yet another theorem-proving esoteric programming language invented by User:Hakerh400. This axiomatic system is a mixture of combinatory lambda calculus and inductive Peano arithmetic. Natural numbers are represented as Church numerals.
There are axioms. Each axiom can be understood as a built-in blackbox function. The type of the function corresponds to the statement that the axiom asserts.
A theorem is a function. It has a type signature and an implementation. Type signature is the proposition that the theorem aims to prove. The implementation is a combination of axioms and/or other theorems defined so far.
A function from type A to type B corresponds to an implication from proposition A to proposition B. Function application corresponds to Modus Ponens.
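This Curry-Howard reading can be sketched in Haskell (an analogy we draw here, not Digital Miracle syntax; the names modusPonens and conclusion are ours, for illustration):

```haskell
-- A proof of the implication A -> B is a function from A to B,
-- so Modus Ponens is nothing more than function application.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- Example: from a "proof" of Int -> String and a "proof" of Int,
-- we conclude String.
conclusion :: String
conclusion = modusPonens show (42 :: Int)
```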
A combinator represents a value in the combinatory logic. It can be understood as a function that takes an input and produces an output. Combinators are not theorems. Theorems are meta-functions that operate on propositions. Combinators are just type-level values that theorems can analyze.
There are three combinator constructors:
    K     The K combinator from the SKI calculus
    S     The S combinator from the SKI calculus
    Call  Takes two combinators as arguments and produces a combinator that represents their application
The Call constructor can also be used in infix notation as the (.) operator. For example, if we want to call S with argument K, we can write either Call S K or S . K. We cannot just write S K.
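As a rough illustration (not part of the language itself), combinator terms can be modeled in Haskell as a tree whose only way of combining values is an explicit Call node. We use (.:) as the infix alias, since (.) is already Haskell's composition operator:

```haskell
-- Combinator terms: the constructors K, S, and Call.
data Term = K | S | Call Term Term
  deriving (Eq, Show)

-- Infix alias for Call, standing in for the language's (.) operator.
infixl 9 .:
(.:) :: Term -> Term -> Term
(.:) = Call

-- "Call S K" and "S . K" denote the same term;
-- bare juxtaposition like "S K" has no meaning for terms.
example :: Term
example = S .: K
```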
We introduce some additional combinators and their aliases:
    I     The identity function (Haskell equivalent: id)
    D     The composition operator (Haskell equivalent: (.))
    F     Flip arguments (Haskell equivalent: flip)
    Zero  Church numeral 0
    Suc   Successor of a Church numeral
Natural numbers are represented as Church numerals. We have three axioms:
    Nat Zero                              Zero is a natural number
    forall n. Nat n -> Nat (Suc . n)      Successor of a natural number is also a natural number
    forall a b. a == b -> Nat a -> Nat b  If two values are equal and the first one is a natural number, then the second one is also a natural number
Note: the dot after the forall arguments is not the Call operator, but just a separator between the arguments and the statement. For example:

    forall n. Nat n -> Nat (Suc . n)
            ^                   ^
            Separator           Call
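Church numerals themselves can be sketched in Haskell: a numeral n is a function that applies f to x exactly n times. The helper toInt is ours, added for demonstration:

```haskell
-- Church numeral 0 applies f zero times.
zero :: (a -> a) -> a -> a
zero _ x = x

-- Successor: apply f one more time than n does.
suc :: ((a -> a) -> a -> a) -> (a -> a) -> a -> a
suc n f x = f (n f x)

-- Count the applications to recover an ordinary Int.
toInt :: ((Int -> Int) -> Int -> Int) -> Int
toInt n = n (+ 1) 0
```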
Finally, we have axioms regarding equality:
    Refl  Equality is reflexive
    Com   Equality is commutative
    Tran  Equality is transitive
    Cong  Function and argument congruence
    KDef  Definition of the K combinator
    SDef  Definition of the S combinator

There are two further axioms: contradiction elimination, which says that if we manage to prove a statement that cannot hold, then any proposition follows; and induction over natural numbers.
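The definitional axioms KDef and SDef mirror how K and S behave as ordinary functions. In Haskell (a behavioral sketch, not the axioms themselves):

```haskell
-- KDef: K . a . b == a
k :: a -> b -> a
k x _ = x

-- SDef: S . a . b . c == a . c . (b . c)
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)
```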
Each theorem has a name, a type signature (proposition) and an implementation (proof). Recursion is strictly forbidden. A theorem cannot reference itself, or other theorems that are defined after it.
First, let's prove that if a == b and a == c, then c == b. It is similar to transitivity of equality, but some things are flipped. We can write a theorem for it:
    tr1 :: forall a b c. a == b -> a == c -> c == b
    tr1 ab ac = Tran (Com ac) ab
Explanation: The theorem takes two arguments. The first argument ab is a proof that a == b, and the second argument ac is a proof that a == c. When we apply axiom Com to ac, we get a proof that c == a. Then we apply axiom Tran to c == a and a == b to obtain c == b. ∎
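The same proof can be replayed with Haskell's propositional equality from Data.Type.Equality, where Com corresponds to sym and Tran to trans (an analogy we draw, not something the language provides):

```haskell
{-# LANGUAGE TypeOperators #-}
import Data.Type.Equality ((:~:) (Refl), sym, trans)

-- tr1: from a == b and a == c, conclude c == b.
tr1 :: a :~: b -> a :~: c -> c :~: b
tr1 ab ac = trans (sym ac) ab
```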
We can also write the following theorem:
    tr2 :: forall a b c. a == b -> b == c -> a == c
    tr2 = Tran
It is equivalent to axiom Tran. Now we can use these two theorems in infix notation. For example, tr2 P Q can be written as P `tr2` Q. They are left-associative when used in infix notation. Theorem tr1 modifies the left-hand side, while theorem tr2 modifies the right-hand side. We can use them to sequentially construct the statement we need from the set of proofs we have so far.
Next, we can prove argument congruence and functional congruence:
    acong :: forall f a b. a == b -> f . a == f . b
    acong = Cong Refl
acong states that if two combinators a and b are equal, we can call any combinator f with either of them and the results will be equal. We proved it by applying axiom Cong to axiom Refl. The type unifier does the magic.
And for functions:
    fcong :: forall f g a. f == g -> f . a == g . a
    fcong fg = Cong fg Refl

    fcong2 :: forall f g a b. f == g -> f . a . b == g . a . b
    fcong2 fg = fcong (fcong fg)
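Here too there is a direct Haskell counterpart: Data.Type.Equality exports apply, which plays the role of Cong, so acong and fcong can be transcribed almost verbatim (again, our analogy):

```haskell
{-# LANGUAGE TypeOperators #-}
import Data.Type.Equality ((:~:) (Refl), apply)

-- acong = Cong Refl: congruence in the argument.
acong :: a :~: b -> f a :~: f b
acong = apply Refl

-- fcong fg = Cong fg Refl: congruence in the function.
fcong :: f :~: g -> f a :~: g a
fcong fg = apply fg Refl
```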
Now, something more interesting. Let's prove that I . a == a. That is, the identity combinator returns the argument unchanged. Here is the theorem:
    id :: forall a. I . a == a
    id = Refl `tr2` SDef `tr2` KDef
Explanation: We start from Refl. Then we apply axiom SDef to the right-hand side, and then we apply axiom KDef to the right-hand side. How does it actually work? First, note that I is just an alias for S . K . K. We start from Refl, so we have S . K . K . a == S . K . K . a. Then we apply SDef to the RHS (right-hand side), obtaining S . K . K . a == K . a . (K . a). Finally, we apply KDef to the RHS, obtaining S . K . K . a == a. Since I is an alias for S . K . K, this is equivalent to I . a == a. ∎
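We can check the same fact computationally in Haskell: defining s and k as ordinary functions, s k k really is the identity.

```haskell
k :: a -> b -> a
k x _ = x

s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

-- I = S . K . K: s k k x = k x (k x) = x.
i :: a -> a
i = s k k
```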
The next thing we want to prove is related to the D combinator. We want to prove that D . a . b . c == a . (b . c). Here is the theorem:
    dot :: forall a b c. D . a . b . c == a . (b . c)
    dot =
      let dot_a = Refl `tr2` SDef `tr2` fcong KDef
            :: D . a == S . (K . a)
      in fcong2 dot_a `tr2` SDef `tr2` fcong KDef
Explanation: We use a let statement to make the proof more readable. We first prove dot_a, which states that D . a == S . (K . a). Then we use dot_a to prove the final statement. ∎
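The proof of dot_a relies on D unfolding to S . (K . S) . K, the standard SKI encoding of composition. As a sanity check in Haskell (a behavioral sketch, with our own names s, k, d), s (k s) k really composes functions:

```haskell
k :: a -> b -> a
k x _ = x

s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

-- D = S . (K . S) . K: d f g x = f (g x), i.e. composition.
d :: (b -> c) -> (a -> b) -> a -> c
d = s (k s) k
```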
The F combinator: F . a . b . c == a . c . b. Theorem:
    flip :: forall a b c. F . a . b . c == a . c . b
    flip =
      let flip_a = Refl `tr2` SDef `tr2` fcong dot `tr2` acong KDef
            :: F . a == D . (S . a) . K
          flip_ab = fcong flip_a `tr2` dot
            :: F . a . b == S . a . (K . b)
      in fcong flip_ab `tr2` SDef `tr2` acong KDef
Explanation: Pretty straightforward, isn't it?
And one more thing at the end. Let's define a new combinator alias:
    type FI = F . I
and prove the following theorem:
    flip_id :: forall a b. FI . a . b == b . a
    flip_id = Refl `tr2` flip `tr2` fcong id
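In Haskell terms, FI corresponds to flip id, and flip id a b indeed applies b to a (our analogy, using Prelude's flip and id):

```haskell
-- FI = F . I corresponds to Haskell's flip id:
-- flipId a b = id b a = b a.
flipId :: a -> (a -> b) -> b
flipId = flip id
```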
We can define any alias we want. However, like theorems, aliases cannot be recursive.
This axiomatic system is probably as strong as the first-order Peano arithmetic. It cannot reason about uncomputable functions, and since the set of all computable functions is countable, it cannot represent real numbers or transfinite cardinals. It is weaker than Zermelo-Fraenkel set theory, and probably even weaker than higher-order logic.
Regarding consistency, the system appears to be consistent, unless some silly mistake has been made in the axiomatization.
There are combinators about which we can prove almost nothing, such as S . I . I . (S . I . I). However, we can construct inductive datatypes using the induction schema for natural numbers (to be precise, it is actually not an axiom schema, but a single axiom, because it does not involve a meta-level predicate parameter). Once we construct datatypes, we can write theorems that prove certain statements as long as some combinators belong to a datatype. We can explicitly pass around proofs of datatype inhabitation. This would be more verbose than in typed higher-order logic (where types are implicit), but it would allow more freedom regarding combinator applications.
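To see why S . I . I . (S . I . I) is so intractable, one can reduce it mechanically. The following Haskell sketch (our own small-step reducer, with I expanded to its alias S . K . K) shows that I . K normalizes in a couple of steps, while the term above never reaches a normal form within any step budget:

```haskell
import Control.Applicative ((<|>))

-- Untyped combinator terms; I is the alias S . K . K, as in the article.
data Term = S | K | App Term Term
  deriving (Eq, Show)

i, omega :: Term
i = App (App S K) K           -- I = S . K . K
omega = App sii sii           -- S . I . I . (S . I . I)
  where sii = App (App S i) i

-- One leftmost-outermost reduction step, if a redex exists.
step :: Term -> Maybe Term
step (App (App K x) _)         = Just x
step (App (App (App S f) g) x) = Just (App (App f x) (App g x))
step (App f x) = (\f' -> App f' x) <$> step f <|> App f <$> step x
step _ = Nothing

-- Reduce with a step budget; Nothing means no normal form was found.
normalize :: Int -> Term -> Maybe Term
normalize n t = case step t of
  Nothing -> Just t
  Just t' -> if n <= 0 then Nothing else normalize (n - 1) t'
```

Running normalize on App i K yields K quickly, while normalize on omega exhausts any budget, mirroring the claim that such terms admit almost no provable statements.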