Computational class

The computational class of a language describes where it stands among the known computational models. Some of these models are known to solve exactly the same set of problems as others, while some are known to solve strictly more problems than others. This naturally gives a partial preorder on models, dividing them into equivalence classes based on their computational power (which problems they can solve).

The relative capabilities of these models are sometimes studied under complexity theory, especially in the "real" world, where performance is an issue. The world of esoteric programming, however, is generally more concerned with pure computability theory - what kinds of problems a system can solve at all, rather than how efficiently they can be solved.

Turing-completeness

When considering at least somewhat realistically implementable languages, the 'top' of this partial preorder is the Turing machine, which (under the Church-Turing thesis) can implement any specifiable algorithm; other systems which can be shown to be equivalent to it (the lambda calculus, combinatory logic, Post canonical systems, Minsky machines, Kolmogorov machines, many programming languages, etc.) are termed Turing-complete.

Descending from this point are many different models of lesser computational power (i.e. that can be shown to be unable to solve as many problems as a Turing-complete system can). A sorely incomplete list, in roughly descending order of power:

  • linear bounded automata
  • push-down automata
  • finite state automata
  • combinational logic (memoryless systems)

Ascending or deviating from this point are models that belong to Category:Uncomputable; these cannot be implemented on Turing machines and therefore, assuming the Church-Turing thesis, cannot be implemented by any realistic means.

Proofs of computational class

A proof of an esoteric programming language's computational class is often a useful thing to have. This wiki tries to present as many as possible in the articles for individual languages. Note, however, that these are often presented as (more readable and less formal) sketches that rely on some intuition to fill in the details, rather than as fully formal proofs.

There are a couple of ways to construct such a proof.

Say you have a language K whose computational class is known. (A common value for K is Brainfuck, which is known to be Turing-complete.) You also have a language Q whose computational class is in question, and you want to show that it is computationally equivalent to K.

Reduction

If it can be effectively argued that every language feature f of K has an equivalent language feature g in Q, then Q can solve at least as many problems as K can.

Note that there are tricky bits here if the language features cannot be "employed arbitrarily" - that is, if it is not possible to achieve an arbitrary effect at an arbitrary point in a program. This concept is perhaps a little difficult to define, but a good idea of what it means can be obtained from examples such as Malbolge and Aura.

This is often called an isomorphism; however, that usage is inaccurate (an isomorphism would also require a mapping from every feature of Q to one in K, not just the other way around).
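
As a minimal sketch of such an argument, assume (hypothetically) that K is brainfuck and Q is C; neither choice is essential. Each of brainfuck's eight commands can be paired with a C fragment that has the same effect on a tape of cells, here written out as a Python mapping:

  # A hypothetical reduction: every brainfuck command (a feature of K)
  # is paired with a C fragment (a feature of Q) that has the same
  # effect on a byte tape addressed by a pointer p.
  REDUCTION = {
      '>': "++p;",             # move the tape head right
      '<': "--p;",             # move the tape head left
      '+': "++*p;",            # increment the current cell
      '-': "--*p;",            # decrement the current cell
      '.': "putchar(*p);",     # output the current cell
      ',': "*p = getchar();",  # read input into the current cell
      '[': "while (*p) {",     # begin loop while the current cell is nonzero
      ']': "}",                # end loop
  }

Since every feature of K has an equivalent in Q that can be employed at any point, Q (here C, granted a sufficiently large tape) can solve at least as many problems as K.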

Translation

If a compiler which can translate any K program to a Q program with the same semantics can be constructed (in any programming language), then Q can solve at least as many problems as K can.

Note that this is really just an automated version of the argument by reduction.
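
For example, the feature mapping sketched in the previous section can be turned directly into a compiler. The following sketch (in Python, again with brainfuck as K and C as Q, both hypothetical choices) emits a C program with the same semantics as a given brainfuck program; the fixed 30000-cell tape is the conventional approximation, and a strict proof would need unbounded storage:

  def bf_to_c(source: str) -> str:
      """Translate a brainfuck program (K) into a C program (Q)
      with the same semantics, one command at a time."""
      fragment = {
          '>': "++p;", '<': "--p;", '+': "++*p;", '-': "--*p;",
          '.': "putchar(*p);", ',': "*p = getchar();",
          '[': "while (*p) {", ']': "}",
      }
      lines = ["#include <stdio.h>",
               "char tape[30000], *p = tape;",  # finite approximation of the tape
               "int main(void) {"]
      lines += [fragment[c] for c in source if c in fragment]
      lines.append("return 0; }")
      return "\n".join(lines)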

Simulation

If an interpreter for K can be implemented in Q, then Q can solve at least as many problems as K can. However, see the notes below for potential difficulties with this form of proof.
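
As a sketch, here is a minimal interpreter for brainfuck (a common choice of K) written in Python (standing in for Q). If the interpreter is faithful, Python can solve at least as many problems as brainfuck; the fixed-size tape is again the conventional approximation:

  import sys

  def run_bf(code: str, stdin=sys.stdin, stdout=sys.stdout) -> None:
      """A minimal brainfuck (K) interpreter written in Python (Q)."""
      # Pre-match brackets so '[' and ']' can jump in one step.
      jump, stack = {}, []
      for i, c in enumerate(code):
          if c == '[':
              stack.append(i)
          elif c == ']':
              j = stack.pop()
              jump[i], jump[j] = j, i
      tape, ptr, pc = [0] * 30000, 0, 0   # finite tape; a strict proof wants it unbounded
      while pc < len(code):
          c = code[pc]
          if c == '>': ptr += 1
          elif c == '<': ptr -= 1
          elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
          elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
          elif c == '.': stdout.write(chr(tape[ptr]))
          elif c == ',':
              ch = stdin.read(1)
              tape[ptr] = ord(ch) if ch else 0
          elif c == '[' and tape[ptr] == 0:
              pc = jump[pc]               # skip past the loop body
          elif c == ']' and tape[ptr] != 0:
              pc = jump[pc]               # jump back to the matching '['
          pc += 1

For instance, run_bf("++++++++[>++++++++<-]>+.") writes "A".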

Note on converses

Note that the converse of each of these only tells you that K can solve at least as many problems as Q can. So, if K is already known to be e.g. Turing-complete, this alone tells you very little. However, if this is shown along with a proof that Q can solve at least as many problems as K, both together prove that K and Q share exactly the same computational class.

Note on algorithms

It is tempting to think that, because some algorithm requires a certain class of computational model for its realization, an implementation of that algorithm in some system proves that the system is of that computational class. For example, recognizing the strings xⁿyⁿ requires a push-down automaton, and so constructing a recognizer of such strings in some system "shows" that the system is as powerful as any push-down automaton.

This is not a sufficient proof, however, since it does not show that the system allows an arbitrary effect at an arbitrary point. A programming language in which the Ackermann function is implemented, for example, might (arbitrarily) disallow any function which is not the Ackermann function from being implemented, making it unable to solve many problems of which a Turing machine is capable - and thus not Turing-complete.
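
To make the example concrete, here is a recognizer for { xⁿyⁿ : n ≥ 0 } sketched in Python, pushing for each x and popping for each y exactly as a push-down automaton would. Per the above, implementing this in some system shows only that the system recognizes this one language, not that it has the general power of push-down automata:

  def recognizes_xnyn(s: str) -> bool:
      """Accept exactly the strings x^n y^n (n >= 0), in the manner of a
      push-down automaton: push on 'x', pop on 'y', accept on empty stack."""
      stack = []
      seen_y = False
      for c in s:
          if c == 'x' and not seen_y:
              stack.append(c)          # record an x
          elif c == 'y' and stack:
              seen_y = True            # once the y's start, no more x's allowed
              stack.pop()              # match a y against a stored x
          else:
              return False             # wrong symbol, wrong order, or too many y's
      return not stack                 # every x must have been matched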

Versus complexity

Computational classes relate the power of computational systems to one another. Complexity classes relate the resources required by algorithms which solve problems on inputs. The two concepts are separate but related. They are further related to other hierarchies of sets and functions, such as the arithmetical, analytical, and polynomial hierarchies.

On the wiki, the computational classes used are those of the Chomsky hierarchy, which is defined in terms of formal grammars and the languages they generate. Languages generated by a certain kind of grammar are recognizable by a certain kind of automaton, and conversely, the languages that a certain type of automaton can recognize are exactly those generated by a certain kind of grammar. The classes used here are likely not precisely those of the Chomsky hierarchy, however, since the automata which recognize the Chomsky languages are nondeterministic.

Certain space complexity classes correspond to these levels. DSPACE(f(n)) refers to the set of problems solvable by a deterministic Turing machine whose tape size is asymptotically bounded by f(n). If restricted to one-directional input (like that in brainfuck), then these are the correspondences for the typical computational classes:

  • DSPACE(0): no space; these machines are memoryless, i.e. combinational logic
  • DSPACE(O(1)): constant space; these machines have a finite amount of storage that does not depend on the size of the input, and are equivalent in power to finite state automata (see the sketch after this list)
  • DSPACE(O((log n)ᵏ)) for some k where 1 ≤ k ≤ 2: (quadratic) polylogarithmic space; pushdown automata[1][fn 1]
  • DSPACE(O(n)): linear space; these machines have storage which scales with the size of the input; linear bounded automata are defined in this way
  • DSPACE(f(n)) for unrestricted f, or n/a: unbounded space; these machines are able to use as much space as they require; Turing machines
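
To illustrate the constant-space row, here is a sketch (in Python) of a two-state finite state automaton over one-directional input; its storage is a single bit regardless of input length, placing it in DSPACE(O(1)):

  def even_number_of_xs(stream) -> bool:
      """A two-state DFA: accept iff the input contains an even number
      of 'x' characters. The only storage is one bit of state, so the
      space used is constant in the length of the input."""
      state = 0                # 0 = even so far, 1 = odd so far
      for c in stream:         # a single one-directional pass
          if c == 'x':
              state ^= 1       # flip the parity; other symbols leave the state alone
      return state == 0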

There are many other space classes between most of these: between finite state automata and pushdown automata is DLOGSPACE (logarithmic space); between pushdown automata and linear bounded automata is deterministic polylogarithmic space, DSPACE((log n)ᵏ) with k > 2; and between linear bounded automata and Turing machines is deterministic polynomial space, DSPACE(nᵏ) with k ≥ 1.

The time complexity classes correspond less neatly with the common deterministic automata.

These correspondences are interesting, and perhaps useful, but when trying to determine the computational class of a system (like an esolang) it is typically better to think about:

  • what languages the system can recognize
  • what problems it can solve
  • what functions it can calculate
  • what other systems it can simulate

The last point is especially important, since Turing-complete systems can always simulate other Turing-complete systems. The first is useful for determining the class of languages which are based on parsing; for instance, a language based on generating parsers from unambiguous Backus-Naur form, with no additional capabilities, would be equivalent in power to pushdown automata. A sketch of such a parser follows.
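
Here is that sketch: a recursive-descent recognizer (in Python) written by hand from the unambiguous grammar S → '(' S ')' S | ε for balanced parentheses, a context-free language. The recursion depth plays the role of a pushdown automaton's stack:

  def parse_balanced(s: str) -> bool:
      """Recognize balanced parentheses via the unambiguous grammar
          S -> '(' S ')' S | (empty)
      The call stack of parse_s acts as the pushdown automaton's stack."""
      def parse_s(i: int) -> int:
          # Return the index just past the longest S starting at i, or -1.
          while i < len(s) and s[i] == '(':
              j = parse_s(i + 1)                  # the inner S
              if j < 0 or j >= len(s) or s[j] != ')':
                  return -1                       # unmatched '('
              i = j + 1                           # consume ')' then loop for the trailing S
          return i
      return parse_s(0) == len(s)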

Footnotes

  1. This is a synthesis of the result that every pushdown automaton recognizes a context-free language; Theorem 4 of Lewis et al., that every context-free language is recognizable by a Turing machine with tape bounded by (log n)²; and the paper's conjecture that log n space is insufficient to recognize all context-free languages.

References

  1. Lewis, Philip M., Richard Edwin Stearns, and Juris Hartmanis. "Memory bounds for recognition of context-free and context-sensitive languages." 6th Annual Symposium on Switching Circuit Theory and Logical Design (SWCT 1965). IEEE, 1965. https://doi.org/10.1109/FOCS.1965.14