A programming language is said to be Turing-complete if it is in the same computational class as a Turing machine; that is to say, it can perform any calculation that a universal Turing machine can. This is considered significant because, under the Church-Turing thesis, a Turing machine is the embodiment of the intuitive notion of an algorithm.
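As a concrete illustration of the definition above, the following is a minimal sketch of a deterministic single-tape Turing machine simulator. The function and machine names here are illustrative, not standard; the example machine simply flips every bit of its input and halts at the first blank.

```python
def run_tm(transitions, tape, state="start", halt="halt", blank="_", max_steps=10_000):
    """Run a single-tape Turing machine.

    transitions maps (state, symbol) -> (write, move, next_state),
    where move is -1 (left) or +1 (right).
    """
    tape = dict(enumerate(tape))  # sparse tape, unbounded in both directions
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += move
    cells = [tape[i] for i in sorted(tape)]
    return state, cells

# Example machine: flip every bit, halting at the first blank cell.
FLIP = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}
```

Running run_tm(FLIP, "1011") halts with the tape reading 0100 followed by a blank.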
Relationship to computation
One of the more common misconceptions about Turing-completeness is that it is required to perform any sort of computation (and, conversely, that any system which can perform some computation is Turing-complete).
This is not the case when only a restricted set of computations is needed. For example, a finite-state automaton can be constructed that computes the Ackermann function f(m, n) for all 1 < m < 1000 and 1 < n < 1000. Indeed, the results of such an evaluation could simply be stored in a million-cell lookup table.
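The lookup-table idea can be sketched as follows. The bounds here are far smaller than the article's 1000, since Ackermann values grow much too quickly to tabulate at that scale in practice; the function names are illustrative.

```python
import functools

# Memoized definition of the Ackermann function.
@functools.lru_cache(maxsize=None)
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Precompute every answer in the bounded domain once...
TABLE = {(m, n): ackermann(m, n) for m in range(4) for n in range(6)}

# ...after which "computing" f(m, n) is a single finite lookup,
# well within the power of a finite-state machine.
def f(m, n):
    return TABLE[(m, n)]
```

Once the table is built, evaluating f requires no recursion at all: the bounded computation has been reduced to a finite lookup.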
This issue is further explored in the next section.
Relationship to the physical world
Turing-completeness applies to abstract systems (such as languages and formal automata) but not to real ones (such as computers and interpreters). Physical machines have the following properties:
- they do not have unbounded storage space; and
- they eventually break down.
For these reasons, no actual computer can be Turing-complete. However, physical computers are clearly capable of performing practical, finite computations in spite of that.
By extension, implementations of languages (compilers and interpreters) on physical computers are also not Turing-complete, but are nonetheless capable of specifying computations.
Therefore, it can be useful to make a distinction between being "usable for computation" and being Turing-complete. A language can be "usable for computation" while not being Turing-complete; all Turing-complete systems are "usable for computation", however.
A formal definition of "usable for computation" is currently being sought.
Often, a programming language is specified "after the fact", in that its specification comes from an attempt to describe the behavior of an existing implementation of that language.
As has just been noted, such implementations, being based as they are on physical machines, cannot be Turing-complete. So when this is done, there is a danger that the language will be overspecified, and that limitations of the physical machine will be reflected in the description of the language. This, often needlessly, causes the language to fail to be Turing-complete.
An interesting example of this is the C programming language. The C specification requires the sizeof operator to yield an integer result: an unsigned integer of type size_t in ANSI C, or an int or long in traditional C. The range of these integer types is bounded. Since the size of any object in a C implementation must therefore be finite, C cannot be Turing-complete: it cannot address an unbounded storage space.
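The arithmetic behind this argument can be made explicit. The 64-bit width below is an assumption for illustration (the C standard does not fix the width of size_t); the function name is likewise illustrative.

```python
# Assuming a 64-bit size_t: a bounded result type bounds every object's size.
SIZE_T_BITS = 64
max_object_bytes = 2**SIZE_T_BITS - 1  # largest value sizeof could report

# A machine whose total storage is bounded by n bytes has at most
# 256**n distinct memory configurations, so it is a (gigantic)
# finite-state machine rather than a Turing machine.
def distinct_configurations(n_bytes):
    return 256**n_bytes
```

However astronomically large, 256**n is finite for any fixed n, which is exactly why bounded storage rules out Turing-completeness in principle while mattering little in practice.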
There are sometimes ways out of this. If a language specification requires that the size of some address (for example) be "implementation-dependent but no less than 16 bits" then one can argue that there exists an "implementation" where the size of addresses is unbounded, and that this "implementation" is Turing-complete. (That is, provided that the concept of an "implementation" which can never be implemented on a real computer is unproblematic.)
Relationship to communication
Turing-completeness does not imply input or output capabilities. A Turing machine might be used by explicitly encoding some input string, along with the program, onto the tape beforehand, running the machine, and then examining the contents of the tape after it has finished. Thus Brainfuck, a language widely cited for its Turing-completeness, has at least two instructions (its input and output commands) that are extraneous to this end. A somewhat more minimal Turing-complete language is F2.
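The input-on-the-tape convention can be sketched with an interpreter for the I/O-free fragment of Brainfuck (only + - < > [ ]); the function name and the fixed tape length are assumptions of this sketch, not part of any particular Brainfuck implementation.

```python
def run(program, tape):
    """Interpret the I/O-free Brainfuck fragment (+ - < > [ ]).

    Input is pre-encoded on the tape; "output" is whatever the tape
    holds when the program halts.
    """
    tape = list(tape) + [0] * 100  # finite tape, sufficient for the sketch
    ptr = pc = 0
    # Precompute matching bracket positions.
    match, stack = {}, []
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            match[i], match[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == "+":
            tape[ptr] += 1
        elif c == "-":
            tape[ptr] -= 1
        elif c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == "[" and tape[ptr] == 0:
            pc = match[pc]   # skip the loop body
        elif c == "]" and tape[ptr] != 0:
            pc = match[pc]   # repeat the loop body
        pc += 1
    return tape

# "Input" 3 and 5 is placed on the tape; the program adds them in place,
# leaving the sum in the first cell.
result = run(">[-<+>]", [3, 5])
```

After the run, the first tape cell holds 8 and the second holds 0: the result is read off the tape, with no output instruction required.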
Relationship to esoteric programming
Although it is often a self-imposed constraint, there is no burden on an esoteric programming language designer to make their language Turing-complete. For instance, it seems reasonable that most esolangers would rather see a language which is interesting but not Turing-complete than one which is Turing-complete but uninteresting.
One good reason not to impose this constraint is that it encourages the creation of languages for which it is not known whether they are Turing-complete, such as Aura and Sortle. In this way, open research questions are provided which, when solved, may shed further light on the nature of computation.
A Turing-complete version of a specific language often becomes desirable once it is demonstrated that the language is not Turing-complete. Several languages have been altered, redesigned, or have had descendant languages created for this reason, such as Befunge-93 and Argh!, which originally included limitations that were removed in Funge-98 and Aargh!, respectively.