Combinatory logic
- Not to be confused with Combinational logic.
Combinatory logic is a model by which logical statements can be described as expressions made up of a small number of primitive elements called combinators. Each combinator is an operator which can take in other combinators and expressions and replace itself and its arguments with a new expression. A "sentence" in combinatory logic is called a combinatory term, and is composed of combinators, free variables, and applications of one combinatory term to another. Combinatory logic is a functional calculus, and can be expressed in terms of lambda calculus.
Combinatory logic can capture the meaning of any arithmetic or logical statement (and by extension, any non-interactive computer program), making it a Turing-complete computational model. It is often described and/or implemented in terms of rewriting (of graphs, trees, or terms).
Because combinatory terms contain no bound variables, combinatory logic forces the use of a point-free programming style to write useful programs.
Syntax
The syntax commonly used for combinators is like that of the lambda calculus: each combinator only takes one argument (but may yield another combinator which will consume further arguments), application associates to the left, and parentheses are only used for disambiguation, not to delimit an argument list.
The grammar of pure combinatory logic is as follows:
combinator ::= Combinator_Symbol ;
app        ::= expression expression ;
parenexp   ::= "(" expression ")" ;
expression ::= app | parenexp | combinator ;
This is a fairly simple grammar, so it will not be explained here.
The syntax of the commonly-used combinatory logic (with assignments and such) is:
combinator ::= Combinator_Symbol ;
var        ::= Var_Symbol ;
app        ::= expression expression ;
parenexp   ::= "(" expression ")" ;
expression ::= app | parenexp | combinator ;
assignment ::= combinator {var} "=" expression {var} ;
line       ::= expression | assignment ;
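For illustration, the pure term grammar can be rendered as a small Haskell datatype. The names in this sketch are our own, and the printer follows the conventions above (application associates to the left, parentheses only for disambiguation):

```haskell
-- A hypothetical Haskell rendering of the pure term grammar (our own names).
data Term
  = Com Char        -- a Combinator_Symbol such as 'S', 'K' or 'I'
  | App Term Term   -- application of one combinatory term to another
  deriving (Eq, Show)

-- Print a term with the conventions used throughout this article:
-- application associates to the left, parentheses only for disambiguation.
pretty :: Term -> String
pretty (Com c)   = [c]
pretty (App f x) = pretty f ++ " " ++ argument x
  where
    argument t@(App _ _) = "(" ++ pretty t ++ ")"  -- nested application on the right
    argument t           = pretty t

-- Example: (S K) K prints as "S K K", while S (K K) prints as "S (K K)".
skk :: Term
skk = App (App (Com 'S') (Com 'K')) (Com 'K')
```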
Examples
The simplest combinator takes a single argument and produces it unchanged. It is traditionally called I and defined as:
I x = x;
Combinators can also ignore arguments. The K combinator accepts two arguments and discards one of them:
K x y = x;
We can also have combinators which apply some arguments to other arguments. An important example is the S combinator, which takes three arguments and applies them to each other:
S x y z = x z (y z);
Example reduction
Let's trace through S K K x, both to show that it is equivalent to I x and as an example of how combinatory logic works. The equivalence is extensional; it holds regardless of the value of x or any surrounding context.
Theorem. S K K is extensionally equivalent to I.
To prove this, we apply S K K to a placeholder variable x and use equational reasoning, applying the definitions of S and K:
S K K x = K x (K x) = x
But this is precisely the definition of I! So they are equivalent when applied to any value.
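The same reduction can be spot-checked by reading the combinators as ordinary Haskell functions; in this sketch the lowercase names are our own, not a standard library:

```haskell
-- Combinators as plain Haskell functions (illustrative, non-standard names).
s :: (a -> b -> c) -> (a -> b) -> a -> c
s x y z = x z (y z)

k :: a -> b -> a
k x y = x

i :: a -> a
i x = x

-- S K K x reduces to K x (K x) and then to x, i.e. it behaves like I:
main :: IO ()
main = print (s k k 42 == i (42 :: Int))  -- True
```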
Properties
Any combinator can be written with extraneous arguments. For example, we could write I to take three arguments, even though it only takes one:
I x = x; I x y z = x y z;
The rank of a combinator is the minimum number of arguments that it must be written with. By convention, the rank of I is one, not zero. Other ranks can be computed by direct inspection. For example, S is written with three arguments, so its rank is three, and the rank of K is two. The rank of a combinator is an invariant; two combinators with different ranks have different extensional behaviour and thus cannot be equivalent.
A combinator is cancellative if it does not use all of its arguments. Common examples of cancellative combinators include K and C K:
K x y = x; C K x y = K y x = y;
A combinator is duplicative if it uses an argument more than once. Common examples of duplicative combinators include L and W:
L x y = x (y y); W x y = x y y;
In contrast, a combinator is linear if it uses every argument exactly once. Examples include I and C:
C x y z = x z y;
A combinator is affine if it uses every argument at most once. Expanding the definitions and counting how many times each argument can be used, we immediately have that:
Theorem. Every linear combinator is affine.
Theorem. No combinator is affine and duplicative.
Finally, a combinator is applicative if it applies one argument to another. This is a fairly common property, but notably neither K nor I qualifies. We also consider combinators like M to be applicative:
M x = x x;
Systems
The study of combinatory logic has focused on bases and systems. A basis is a set of combinators, and a system is the collection of all combinators that can be generated from a given basis.
SKI calculus
In 1924, Moses Schönfinkel discovered and documented the three traditional primitive combinators.[1][2]
I, the identity combinator. Applied to some combinator x, it simply evaluates to x:
I x = x;
K, the constant-making combinator. Applied to some combinator x, it yields a combinator that always evaluates to x, no matter what it is applied to:
K x y = x;
S, the application or "fusion" combinator. Applied to three combinators, it applies each of the first two to the third, then applies the first result to the second:
S f g x = f x (g x);
These combinators capture the essence of the primitive operations of lambda calculus in the following sense:
Theorem. Every term of lambda calculus can be represented by a composite of S, K, and I combinators. (Curry, 1930)
The representation can be computed by recursion on the following abstraction scheme. Any term c that does not mention the abstracted variable x is a constant with respect to x, and its abstraction is formed with K:
λx.c → K c
Then all other terms are either identity terms:
λx.x → I
Or eta-reducible to a term f that does not mention x:
λx.fx → f
Or, in the trickiest case, they are applications, which can be put under S:
λx.yz → S (λx.y) (λx.z)
As a result, the SKI basis is complete; any other combinator can be expressed in terms of this basis by writing down a lambda-calculus term that computes it and then abstracting that term. Additionally, this representation can be computed in reverse, sending an expression of S, K, I, and constants into a lambda expression, so this translation is a faithful representation of the untyped lambda calculus.
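For illustration, the abstraction scheme above can be written out as a short program. The following Haskell sketch uses our own datatype and function names; it translates lambda terms over named variables into combinator terms built from S, K, I and free variables:

```haskell
-- Sketch of the abstraction scheme above (our own, non-standard names).
data Lam = LVar String | LApp Lam Lam | LAbs String Lam    -- lambda terms
data CL  = Var String | S | K | I | App CL CL              -- combinator terms
  deriving (Eq, Show)

-- Translate a lambda term into a combinator term over S, K, I and free variables.
translate :: Lam -> CL
translate (LVar v)   = Var v
translate (LApp a b) = App (translate a) (translate b)
translate (LAbs x b) = abstract x (translate b)

-- abstract x t plays the role of λx.t on combinator terms.
abstract :: String -> CL -> CL
abstract x t
  | not (x `occursIn` t)           = App K t                 -- λx.c   → K c
abstract x (Var v) | v == x        = I                       -- λx.x   → I
abstract x (App f (Var v))
  | v == x && not (x `occursIn` f) = f                       -- λx.f x → f  (eta)
abstract x (App a b)               = App (App S (abstract x a)) (abstract x b)
                                                             -- λx.y z → S (λx.y) (λx.z)
abstract _ t                       = App K t                 -- unreachable fallback

occursIn :: String -> CL -> Bool
occursIn x (Var v)   = v == x
occursIn x (App a b) = occursIn x a || occursIn x b
occursIn _ _         = False

-- Example: translate (LAbs "x" (LAbs "y" (LVar "x"))) yields K
-- (the inner abstraction gives App K (Var "x"); the eta case then returns K).
```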
Theorem (Turing-completeness for SKI). It is undecidable whether a combinator in the SKI basis has a normal form. (Schönfinkel, 1924)
For a listing of proofs of Turing-completeness of the SKI basis, see S and K Turing-completeness proof. Since it only has three combinators, the SKI basis is often considered to be a Turing tarpit.
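Rewriting over the SKI basis can also be sketched directly. The following Haskell fragment (again with our own names) implements one leftmost-outermost reduction step and a fuel-bounded normalizer; the fuel bound is unavoidable, since by the theorem above it is undecidable whether a normal form exists at all:

```haskell
-- Sketch of term rewriting over the SKI basis (our own names and encoding).
data SKI = S | K | I | SKI :$ SKI
  deriving (Eq, Show)
infixl 9 :$

-- One leftmost-outermost reduction step, if any redex remains.
step :: SKI -> Maybe SKI
step (I :$ x)           = Just x
step (K :$ x :$ _)      = Just x
step (S :$ x :$ y :$ z) = Just (x :$ z :$ (y :$ z))
step (f :$ x)           = case step f of
                            Just f' -> Just (f' :$ x)
                            Nothing -> (f :$) <$> step x
step _                  = Nothing

-- Reduce for at most n steps; Nothing means no normal form was found in time.
normalize :: Int -> SKI -> Maybe SKI
normalize n t
  | n <= 0    = Nothing
  | otherwise = maybe (Just t) (normalize (n - 1)) (step t)

-- normalize 100 (S :$ K :$ K :$ K)             == Just K
-- normalize 100 (S :$ I :$ I :$ (S :$ I :$ I)) == Nothing   (no normal form)
```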
In fact, only two distinct combinators are strictly necessary; the I combinator can be represented as S K K, as we saw above. Several other possibilities exist, like S K S. As a result:
Theorem. Every term of lambda calculus can be represented by a composite of S and K combinators.
Theorem (Turing-completeness for SK). It is undecidable whether a combinator in the SK basis has a normal form. (Schönfinkel, 1924)
Curry's 1928 calculus
In 1928, Curry incorporated Schönfinkel's work into his own inquiries into axiomatic logic. Curry borrowed K from Schönfinkel but was initially unconvinced by S, and instead used I, B, C, and W.
BCKW calculus
The 1930 system of four primitive combinators used by Haskell Curry contains B, C, K, and W. B, C, and W are defined by:
B x y z = x (y z); C x y z = x z y; W x y = x y y;
To define the B, C, and W primitives in terms of the SKI system, one can use the expressions:
B = S (K S) K; C = S (B B S) (K K); W = S S (S K);
Derivations proving these (to finish, follow each chain with eta conversions to remove the arguments):
S (K S) K x y z = K S x (K x) y z = S (K x) y z = K x z (y z) = x (y z) = B x y z;
S (B B S) (K K) x y z = B B S x (K K x) y z = B B S x K y z = B (S x) K y z = S x (K y) z = x z (K y z) = x z y = C x y z;
S S (S K) x y = S x (S K x) y = x y (S K x y) = x y (K y (x y)) = x y y = W x y;
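These definitions can be spot-checked by reading the combinators as ordinary Haskell functions; the lowercase names in this sketch are our own:

```haskell
-- Sketch: the SKI definitions of B, C and W, checked on sample arguments.
s :: (a -> b -> c) -> (a -> b) -> a -> c
s x y z = x z (y z)

k :: a -> b -> a
k x y = x

bb :: (b -> c) -> (a -> b) -> a -> c
bb = s (k s) k            -- B = S (K S) K

cc :: (a -> b -> c) -> b -> a -> c
cc = s (bb bb s) (k k)    -- C = S (B B S) (K K)

ww :: (a -> a -> b) -> a -> b
ww = s s (s k)            -- W = S S (S K)

main :: IO ()
main = do
  print (bb negate (* 2) 5 == (-10 :: Int))  -- B x y z = x (y z)
  print (cc (-) 1 10       == (  9 :: Int))  -- C x y z = x z y
  print (ww (+) 3          == (  6 :: Int))  -- W x y   = x y y
```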
To define the S and I primitives from the BCKW system (two alternative definitions are given for S), use:
S = B (B (B W) C) (B B) = B (B W) (B B C); I = W K;
Derivations proving these (again, finish each chain with eta conversions to remove the arguments):
B (B W) (B B C) x y z = B W (B B C x) y z = W (B B C x y) z = B B C x y z z = B (C x) y z z = C x (y z) z = x z (y z) = S x y z;
B (B (B W) C) (B B) x y z = B (B W) C (B B x) y z = B W (C (B B x)) y z = W (C (B B x) y) z = C (B B x) y z z = B B x z y z = B (x z) y z = x z (y z) = S x y z;
I y = W K y = K y y = y;
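These derivations can likewise be spot-checked in Haskell. The sketch below (our own lowercase names) uses the second of the two definitions of S given above, together with I = W K:

```haskell
-- Sketch: S and I recovered from the B, C, K, W primitives.
b :: (y -> z) -> (x -> y) -> x -> z
b x y z = x (y z)

c :: (x -> y -> z) -> y -> x -> z
c x y z = x z y

k :: x -> y -> x
k x y = x

w :: (x -> x -> y) -> x -> y
w x y = x y y

s' :: (a -> b -> c) -> (a -> b) -> a -> c
s' = b (b w) (b b c)      -- S = B (B W) (B B C)

i' :: a -> a
i' = w k                  -- I = W K

main :: IO ()
main = do
  print (s' (+) negate 7 == (0 :: Int))  -- S x y z = x z (y z): 7 + negate 7
  print (i' "unchanged"  == "unchanged")
```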
This interoperability ensures that BCKW is also Turing-complete:
Theorem (Turing-completeness for BCKW). It is undecidable whether a combinator in the BCKW basis has a normal form.
Rosser's 1935 calculus
In 1935, J. Barkley Rosser described the IJ basis; that is, the basis containing I and Rosser's combinator J:
J x y z w = x y (x w z);
Theorem. A combinator belongs to the IJ system if and only if it is not cancellative. (Rosser, 1935)[3]
J is the first of many combinators crafted to fulfill a specific purpose. Church's deductive system (the inconsistent one, not the one that would later become lambda calculus) can be structurally summarized with the combinators B, T, M, and I; J includes them as special cases.
BCI calculus
Rosser then studied the BCI basis. This basis has neither cancellation nor duplication. Nonetheless, it contains several useful combinators, like T:
T x y = y x;
Theorem. BCI is equivalent to BTI. (Rosser)
Later, Smullyan used G to show that BCI has a two-combinator basis:
G x y z w = x w (y z);
Theorem. BCI is equivalent to GI. (Smullyan)
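As a small hedged check (our own lowercase names; T = C I and G = B B C, as in the table of combinators below), these behave as defined when the combinators are read as ordinary Haskell functions:

```haskell
-- Sketch: B, C, I as Haskell functions, with T = C I and G = B B C derived.
b :: (y -> z) -> (x -> y) -> x -> z
b x y z = x (y z)

c :: (x -> y -> z) -> y -> x -> z
c x y z = x z y

i :: a -> a
i x = x

t :: a -> (a -> b) -> b
t = c i           -- T x y = y x

g :: (w -> q -> r) -> (z -> q) -> z -> w -> r
g = b b c         -- G x y z w = x w (y z)

main :: IO ()
main = do
  print (t 3 (+ 1)         == ( 4 :: Int))  -- T applies its second argument to its first
  print (g (-) negate 4 10 == (14 :: Int))  -- G x y z w = x w (y z): 10 - (-4)
```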
Completeness
One of the original lines of inquiry in combinatory logic is whether a particular basis includes a given combinator. An easy answer starts with a complete basis, whose system contains all combinators; that system is the complete system. As mentioned above, examples of complete bases include SK and BCKW.
There are some requirements for such a basis. First:
Theorem (minimum rank). A complete basis must have at least one combinator of rank three or more. (Legrand 1988)[4][5]
The reader can verify that B, C, and S all have rank three. Next:
Theorem. If a basis has no cancellative combinators then its system does not include K.
This folklore theorem excludes the basis BCSI from completeness. However, BCSI contains all non-cancellative combinators; Smullyan calls this the aristocratic system. Other equivalent bases for the aristocratic system include Church's basis BTMI and Rosser's basis IJ. For this reason, complete bases include K or a cousin. Next:
Theorem. If every combinator of a basis is affine then its system does not include any duplicative combinator.
The proof is fairly generic and quick. Reducing any particular affine combinator cannot increase the number of uses of any variable; indeed, the total number of variable occurrences must stay the same or decrease (for example, reducing K x y to x drops the single occurrence of y). In a chain of reductions, that number keeps staying the same or decreasing, regardless of how we choose the next redex, because the less-than-or-equal relation is transitive. So no chain of reductions of affine combinators can produce the duplicative effect.
This leads to the affine system. An example basis is BCKI, although BTKI and GKI also work. This system shows up in certain corners of computer science, such as the study of affine type systems. More commonly, we can repeat the reasoning again with linear combinators:
Theorem. If every combinator of a basis is linear then its system does not include any duplicative combinator.
Theorem. If every combinator of a basis is linear then its system does not include any cancellative combinator.
This yields the linear system, most commonly given by BCI, but also by BTI or GI. Many systems are unable to destroy or create resources, and their algebras therefore tend to be linear; additionally, they often can be described with BCI. For example, linear logic has a fragment corresponding to BCI, and free symmetric monoidal categories are described by BCI as well.
Table of combinators
The following table lists many of the well-studied combinators. For each combinator, a definition in terms of other combinators is given, along with its rank and any notes. For any combinator studied by Smullyan, the bird name is listed as well.[6]
Combinator | Definition | Bird name | Rank | Notes |
---|---|---|---|---|
B | S(KS)K | Bluebird | 3 | applicative |
B[1] | BBB | Blackbird | ||
B[2] | B(BBB)B | Bunting | ||
B[3] | B(BB)B | Becard | ||
C | S(BBS)(KK) | Cardinal | 3 | applicative |
D | BB | Dove | ||
D[1] | B(BB) | Dickcissel | ||
D[2] | BB(BB) | Dovekie | ||
E | B(BBB) | Eagle | ||
Ê | B(BBB)(B(BBB)) | Bald eagle | ||
F | ETTET | Finch | ||
G | BBC | Goldfinch | 4 | applicative |
H | BW(BC) | Hummingbird | ||
I | SKK | Identity bird | 1 | linear |
J | B(BC)(W(BC(B(BBB)))) | Jay bird | 4 | applicative, duplicative |
K | K x y = x | Kestrel | 2 | cancellative |
L | CBM | Lark | 2 | applicative, duplicative |
M | SII | Mockingbird | 1 | applicative, duplicative |
M[2] | BM | Double mockingbird | ||
O | SI | Owl | ||
Q | CB | Queer bird | ||
Q[1] | BCB | Quixotic bird | ||
Q[2] | C(BCB) | Quizzical bird | ||
Q[3] | BT | Quirky bird | ||
Q[4] | F*B | Quacky bird | ||
R | BTT | Robin | ||
S | S x y z = x z (y z) | Starling | 3 | applicative, duplicative |
T | CI | Thrush | 2 | applicative |
U | LO | Turing bird | ||
V | BCT | Vireo | ||
W | W x y = x y y | Warbler | 2 | applicative, duplicative |
W[1] | CW | Converse warbler | ||
Y | SLL | Sage bird | ||
I* | S(SK) | Identity bird once removed | ||
W* | BW | Warbler once removed | ||
C* | BC | Cardinal once removed | ||
R* | C*C* | Robin once removed | ||
F* | BC*R* | Finch once removed | ||
V* | C*F* | Vireo once removed | ||
W** | B(BW) | Warbler twice removed | ||
C** | BC* | Cardinal twice removed | ||
R** | BR* | Robin twice removed | ||
F** | BF* | Finch twice removed | ||
V** | BV* | Vireo twice removed | ||
ω | MM | Omega bird [unsourced] | ||
θ | YO | Theta (Bird?) [unsourced] | ||
ι | S (S I (K S)) (K K) | - | 1 | Iota |
References
- ↑ M. Schönfinkel. Über die Bausteine der mathematischen Logik. Mathematische Annalen, 92:305–316, 1924. https://www.cip.ifi.lmu.de/~langeh/test/1924%20-%20Schoenfinkel%20-%20Ueber%20die%20Bausteine%20der%20mathematischen%20Logik.pdf
- ↑ J. P. Seldin, 2008. The Logic of Curry and Church. https://people.uleth.ca/~jonathan.seldin/CCL.pdf
- ↑ J. B. Rosser, 1935. A mathematical logic without variables. Dissertation, Princeton University.
- ↑ R. Legrand, “A basis result in combinatory logic,” The Journal of Symbolic Logic, vol. 53, no. 4, pp. 1224–1226, 1988. doi:10.2307/2274616
- ↑ J. Tromp, P. L. Lumsdaine, 2022. Do combinatory logic bases need a function of 3 variables? https://mathoverflow.net/q/415373
- ↑ R. Smullyan, 1982. To mock a mockingbird; and other logic puzzles including an amazing adventure in combinatory logic.
See also
- Unlambda
- Lazy K
- Iota
- normalcalc
- Probabilistic combinatory logic
- User:Hppavilion1/Combinatory temporal logic
External resources
- Online course in combinatory logic using Scheme (from the Wayback Machine; retrieved on 13 October 2004)
- Wikipedia:Combinatory logic
- Wikipedia:SKI combinator calculus
- Wikipedia:B, C, K, W system
- Slides (2002) by Samson Abramsky, explaining some things about the BCKW system and affine types