A Turing tarpit is a programming language that aims for Turing-completeness with an arbitrarily small number of linguistic elements - ideally, as few as possible.
The 1950s and 1960s were something of a heyday for Turing tarpits: academic interest in them was keen, and they enjoyed considerable popularity in computability research circles.
Near the middle of the 1960s, however, the bubble of optimism surrounding them burst. In his 1987 paper "The Busy Beaver Game and the Meaning of Life", Allen Brady relates a private conversation with John McCarthy (the inventor of LISP) circa 1964, in which Prof. McCarthy states:
- We thought if we were to find the smallest universal machine then we could learn a great deal about computability -- of course that wouldn't be so!
In the final chapter, "Very Simple Bases for Computability", of his 1967 book Computation: Finite and Infinite Machines, Marvin Minsky writes:
- The reader is welcome to enter the competition [to design the smallest universal Turing machine ...] although the reader should understand clearly that the question is an intensely tricky puzzle and has essentially no serious mathematical interest.
Although enthusiasm for them has declined, Turing tarpits do remain valuable as an academic tool. They can be pedagogically useful in illustrating various aspects of computability theory and the principles of programming languages (e.g. Böhm's language P'' played a role in seminal studies of structured programming). They also have applications in algorithmic complexity theory, in which systems such as binary combinatory logic have been employed.
The pursuit of Turing tarpits in the esoteric programming language community, however, seems to be essentially for its own sake. Whether this pursuit is regarded by its practitioners as a pastime, an art form, informal research, or something else entirely - it seems safe to say that they must enjoy the "intensely tricky puzzle" to which Minsky referred.
The term "Turing tarpit" itself comes from the 1982 paper "Epigrams on Programming" by Alan Perlis:
- Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.
In order to find an arbitrarily simple machine, one must first define "simplicity" - i.e. some metric to be minimized. This metric must be based on a measurable property of the language, and it usually comes down to counting certain linguistic elements. There are several kinds of linguistic element to pick from, and the borders between them are not always crisp, which makes this definition a challenge.
For universal Turing machines, the product of the number of states and the number of symbols is often used. Claude Shannon showed that either the number of states or the number of symbols can be reduced to as few as 2 -- but only at the cost of increasing the other. Minsky and Bobrow showed, by exhaustive analysis, that no Turing machine with only 2 states and 2 symbols is universal.
States are not always easy or natural to count in a programming language, though; for imperative languages, instructions can be counted instead, while functional and rewriting languages might count reductions. We can use the (admittedly vague) term "operation" to refer to any of these, but it should be understood that these are not generally comparable measures.
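As a concrete illustration, the two metrics above can be sketched in a few lines of Python. The data representations here (a transition table keyed by state-symbol pairs, and a string of command characters) are assumptions chosen for the example, not drawn from any particular formalism:

```python
# Two "simplicity" metrics, sketched for illustration.

def state_symbol_product(table):
    """Metric for a Turing machine: number of states times number of symbols.

    `table` maps (state, symbol) -> (new_state, written_symbol, move).
    Only states and symbols that appear on the left-hand side are counted,
    so a halting state with no outgoing transitions is not included
    (following the usual convention for small universal machines).
    """
    states = {s for (s, _) in table}
    symbols = {c for (_, c) in table}
    return len(states) * len(symbols)

def instruction_count(source, instruction_set):
    """Metric for an imperative language: distinct instructions actually used.

    Characters outside the instruction set are treated as comments.
    """
    return len(set(source) & set(instruction_set))

# A 2-state, 2-symbol machine (not universal, per Minsky and Bobrow):
tiny = {
    ('A', 0): ('B', 1, +1),
    ('A', 1): ('B', 1, -1),
    ('B', 0): ('A', 1, -1),
    ('B', 1): ('H', 1, +1),  # H is a halting state
}
# state_symbol_product(tiny) == 4

# For Brainfuck, instructions and symbols coincide, so the metric is just
# the number of distinct command characters used:
# instruction_count("+++[->+<]", "><+-.,[]") == 6
```

Note that, as the text says, these two numbers measure different things and cannot be meaningfully compared with each other.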
Here are rough metrics on some mostly-esoteric programming languages which are Turing tarpits:
- Binary combinatory logic: 2 operations, 2 symbols
- BitChanger: 4 instructions, 4 symbols
- Bitwise Cyclic Tag: 2 instructions, 2 symbols
- Brainfuck: 8 instructions, 8 symbols
- iag: 3 instructions, >3 operations, 3 symbols
- Iota: 2 operations, 2 symbols
- Jot: 3 operations, 2 symbols
- OISC: 1 instruction, 3 symbols (signed unary, including separator)
- Thue: 1 operation, 4 or 5 symbols (minimum)
- Wierd: 6 operations, 2 symbols
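To give a feel for how small these languages are, here is a minimal interpreter for Bitwise Cyclic Tag (2 instructions, 2 symbols) - a sketch based on the commonly stated semantics, not a reference implementation:

```python
# Bitwise Cyclic Tag: the program is a cyclic bit string with exactly two
# instructions, which is what earns it a place among the Turing tarpits.
#   0  -> delete the leftmost data bit
#   1x -> if the leftmost data bit is 1, append the bit x to the data
# Execution halts when the data string becomes empty.

def run_bct(program, data, max_steps=1000):
    """Run BCT `program` on `data` (both strings of '0'/'1' characters).

    `max_steps` guards against non-halting programs, since BCT is
    Turing-complete and halting is undecidable in general.
    """
    i = 0  # program counter; the program is cyclic, so we index mod len
    for _ in range(max_steps):
        if not data:
            break
        cmd = program[i % len(program)]
        i += 1
        if cmd == '0':
            data = data[1:]            # delete leftmost data bit
        else:
            operand = program[i % len(program)]  # '1' consumes the next bit
            i += 1
            if data[0] == '1':
                data += operand        # append operand, data bit untouched
    return data
```

For example, the one-instruction program "0" simply erases its data, and the program "100" run on the data "1" also halts with empty data after a few steps. The "nothing of interest is easy" half of Perlis's epigram shows up quickly: encoding any real computation into such bit strings is laborious.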