The computational class of a language describes where it stands in the set of known computational models. Some of these models are known to be able to solve the exact same set of problems as other models, while some models are known to be able to solve more problems than other models. This immediately suggests a partially ordered set, divided into equivalence classes by computational power (which problems the models can solve).
The relative capabilities of these models in this set are sometimes studied under complexity theory, especially in the "real" world, where performance is an issue. The world of esoteric programming, however, is generally more concerned with pure computability theory - what kinds of problems a system can solve at all, rather than how efficiently it can be done.
When considering at least somewhat realistically implementable languages, the 'top' of this partial order is the Turing machine, which (under the Church-Turing thesis) can implement any specifiable algorithm; other systems which can be shown to be equivalent to it (the lambda calculus, combinatory logic, Post canonical systems, Minsky machines, Kolmogorov machines, many programming languages, etc.) are termed Turing-complete.
Descending from this point are many different models of less computational power (i.e. that can be shown to be unable to solve as many problems as a Turing-complete system can). A sorely incomplete list follows:
Ascending or deviating from this point are models belonging to Category:Uncomputable: they cannot be implemented on Turing machines, and therefore, assuming the Church-Turing thesis, cannot be implemented by any realistic means.
Proofs of computational class
A proof of an esoteric programming language's computational class is often a useful thing to have. This wiki tries to present as many as possible in the articles for individual languages. Note however that these are often presented as (more readable and less formal) sketches that rely on some intuition to fill in the details, rather than fully formal proofs.
There are a couple of ways to construct such a proof.
Say you have a language K whose computational class is known. (A common value for K is Brainfuck, which is known to be Turing-complete.) You also have a language Q whose computational class is in question, and you want to show that it is computationally equivalent to K.
If it can be effectively argued that every language feature f of K has an equivalent language feature g in Q, then Q can solve at least as many problems as K can.
Note that there are tricky bits here if the language features cannot be "employed arbitrarily" - that is, if it is not possible to achieve an arbitrary effect at an arbitrary point in it. This concept is perhaps a little difficult to define, but a good idea of what it means can be obtained from examples such as Malbolge and Aura.
This is often called an isomorphism; however, that usage is inaccurate (an isomorphism would also require a mapping from every feature of Q to one in K, not just the other way round).
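As a toy illustration of such a feature-by-feature argument (taking K to be brainfuck and Q to be Python purely for concreteness; neither choice comes from the text above), the correspondence might be recorded like this:

```python
# Hypothetical feature correspondence: each feature f of K (brainfuck)
# paired with an equivalent feature g of Q (Python). Exhibiting such a
# table, together with an argument that the Q features can be employed
# arbitrarily, shows Q can solve at least as many problems as K.
FEATURE_MAP = {
    "unbounded tape of byte cells": "a Python list extended on demand",
    "increment/decrement cell (+ -)": "tape[p] = (tape[p] + 1) % 256, etc.",
    "move pointer (> <)": "p += 1 / p -= 1",
    "loop while cell nonzero ([ ])": "while tape[p]: ...",
    "byte I/O (. ,)": "sys.stdout.write / sys.stdin.read",
}
```

The table itself is not the proof; the proof is the accompanying argument that each right-hand side really does simulate its left-hand side, composably.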
If a compiler which can translate any K program to a Q program with the same semantics is constructed (in any programming language), then Q can solve at least as many problems as K can.
Note that this is really just an automated version of the argument by reduction.
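A minimal sketch of such a compiler, again taking K to be brainfuck and Q to be Python only for illustration (and omitting the input command `,` for brevity):

```python
# Sketch: a brainfuck-to-Python compiler. Each brainfuck command is
# mapped to Python source with the same semantics, so any brainfuck
# program becomes an equivalent Python program.
CMDS = {
    '+': "tape[p] = (tape[p] + 1) % 256",
    '-': "tape[p] = (tape[p] - 1) % 256",
    '>': "p += 1",
    '<': "p -= 1",
    '.': "out.append(chr(tape[p]))",
    '[': "while tape[p]:",
}

def compile_bf(src):
    lines = ["tape = [0] * 30000", "p = 0", "out = []"]
    depth = 0
    for c in src:
        if c == ']':
            depth -= 1          # close the loop by dedenting
        elif c in CMDS:
            lines.append("    " * depth + CMDS[c])
            if c == '[':
                depth += 1      # loop body is indented under the while
    return "\n".join(lines)

# The compiled program computes the same result as the source program:
env = {}
exec(compile_bf("+++[->++<]"), env)   # computes 3 * 2 into cell 1
# env["tape"][1] == 6
```

Loop brackets become Python indentation, so a syntactically valid brainfuck program (with matched, non-empty loops) compiles to a syntactically valid Python program.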
If an interpreter for K can be implemented in Q, then Q can solve at least as many problems as K can. However, see ℒ for potential difficulties with this form of proof.
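A sketch of this form of proof, once more with K as brainfuck and Q as Python purely for illustration (I/O omitted for brevity): the mere existence of such an interpreter shows Q can solve any problem K can.

```python
# Sketch: an interpreter for K (brainfuck, without I/O) written in Q (Python).
def run_bf(src):
    # Precompute matching brackets for the two loop commands.
    stack, match = [], {}
    for i, c in enumerate(src):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            match[i], match[j] = j, i
    tape, p, ip = [0] * 30000, 0, 0
    while ip < len(src):
        c = src[ip]
        if c == '+':
            tape[p] = (tape[p] + 1) % 256
        elif c == '-':
            tape[p] = (tape[p] - 1) % 256
        elif c == '>':
            p += 1
        elif c == '<':
            p -= 1
        elif c == '[' and not tape[p]:
            ip = match[ip]      # skip the loop body
        elif c == ']' and tape[p]:
            ip = match[ip]      # jump back to the loop head
        ip += 1
    return tape

tape = run_bf("++[->+++<]")     # computes 2 * 3 into cell 1
# tape[1] == 6
```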
Note on converses
Note that the converse of each of these only tells you that K can solve at least as many problems as Q can. So, if K is already known to be e.g. Turing-complete, this alone tells you very little. However, if this is shown along with a proof that Q solves at least as many problems as K, the two together prove that K and Q share the exact same computational class.
Note on algorithms
It is tempting to think that, because some algorithm requires a certain class of computational model for its realization, an implementation of that algorithm in some system proves that the system is of that computational class. For example, recognizing strings of the form x^n y^n requires at least a push-down automaton, and so constructing a recognizer of such strings in some system "shows" that the system is as powerful as any push-down automaton.
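The kind of recognizer meant here can be sketched with an explicit stack, the way a push-down automaton would operate (Python is used only for illustration):

```python
# Sketch: recognizing { x^n y^n : n >= 0 } as a push-down automaton
# would: push one symbol per 'x', pop one per 'y', and accept only if
# the input is exhausted exactly when the stack empties.
def recognize(s):
    stack = []
    i = 0
    while i < len(s) and s[i] == 'x':
        stack.append('x')
        i += 1
    while i < len(s) and s[i] == 'y':
        if not stack:
            return False        # more y's than x's
        stack.pop()
        i += 1
    return i == len(s) and not stack
```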
This is not a sufficient proof, however, since it does not show that the system allows an arbitrary effect at an arbitrary point. A programming language in which the Ackermann function is implemented, for example, might (arbitrarily) disallow any function which is not the Ackermann function from being implemented, making it unable to solve many problems of which a Turing machine is capable - and thus not Turing-complete.