From Esolang

Turing complete?

Turing complete you say? That is going to require an infinite bit addressing scheme, and each cell must be infinitely long, and able to hold both the (probably infinite) cell part and the (definitely infinite) bit part of an address... I foresee some subtleties upcoming. :) --Ørjan 13:45, 10 July 2009 (UTC)

Browsing the emulator source code (admittedly I don't really know c++ so I have to guess) I see infinite memory and cell size is not yet supported. On the other hand I can see some possible infinite address layouts. This one uses little-endian sub-addresses:

cell no. n

|      section 0      |             section 1              |
|   bit sub-address   | section flag |  cell sub-address   |
|      infinite       |    1 bit     |      infinite       |
|  0:0:n, 1:0:n, ...  |    0:1:n     |  1:1:n, 2:1:n, ...  |

--Ørjan 11:31, 14 July 2009 (UTC)

Come to think of it, with this scheme sub-address bits should be 0 from some point on, otherwise you would get addresses that don't correspond to an actual bit. --Ørjan 11:42, 14 July 2009 (UTC)
I cannot clearly understand what you mean by infinite cell size and sub-addresses. There is no sub-addressing within a word; addresses are global and are counted from the first bit in the memory. The fact that addressing is limited by the limited memory size does not make the abstract machine less potent. For example, brainfuck is Turing-complete with a 16-bit memory cell and 7 (only!) loop levels. For Turing-completeness, the resources have to be just enough to write an interpreter of any other Turing-complete language. To build this interpreter you need neither infinite memory nor infinite cell sizes. I noticed that you removed the TC sentence and reference in the article. I have written a couple of high-level programs [1] in BitBitJump. They are sufficient proof for me that the language is TC. Can you please state either that the examples are incorrect or that they are not a sufficient proof? Anyway, thanks for the comments. - Oleg 15 July 2009
A Turing-complete language has to be able to access limitless memory (theoretically, of course; that's never possible in practice since such a thing doesn't exist...) -- the problem here is that the 'bits are grouped into words of same size'; and since they can only be grouped in words of finite size, as it's impossible to group the bits in words of infinite size, the result is that the language is only able to access a finite amount of memory, no matter if it technically has an infinite array to use. It may be able to use 5000-bit words, it may be able to use all the memory your computer has, but that is not enough; in brainfuck a program like +[>+] goes on infinitely, the memory is infinite, only reaching an "end" when the interpreter runs out of memory (but that's only an implementation issue, on which the Turing-completeness of the language isn't dependent) -- in BitBitJump a program like that isn't possible. Unless I have misunderstood the language (which is possible, too)... --Keymaker 11:32, 15 July 2009 (UTC)
The size of the word (memory cell) is not specified in BitBitJump. That is the same in Brainfuck. It is an abstraction, which becomes concrete in real implementation. The meaning of it in relation to Turing-completeness is the following. For any possible algorithm there must exist a finite size of memory and a finite size of word, which are enough to implement this algorithm. - Oleg 16 July 2009
This is fundamentally different, however. In BitBitJump the word size (and it must be a finite number) has to be chosen before anything at all can be programmed. (At least if we want any meaningful programs.) In brainfuck the size of the memory cell has no effect on the memory pointer, which is moved around with '<' and '>'. In BitBitJump there is a connection; the maximum and minimum memory address that can be accessed depends solely on the chosen size of the word, and that never allows accessing more than a finite amount of memory. Brainfuck is Turing-complete even if the cells allowed only 0 and 1 as their values. --Keymaker 07:08, 16 July 2009 (UTC)
What Keymaker said. Turing completeness is a quite abstract technical concept. I don't doubt that your examples are correct, and I expect you can program it well for any practical use, since real computers have large but still finite memory. But Turing completeness requires memory to be potentially infinite, either at the outset or by growing, in a way that cannot be limited for a program run in advance. (One thing this is needed for, is so that you have an unsolvable halting problem like Turing machines do.) One hint that it is hard to extend BitBitJump to infinite memory is that your assembler has to depend on bit size when deciding how to assemble stuff - the resulting "machine" code changes, and not just by padding cells with more zeros. --Ørjan 14:03, 15 July 2009 (UTC)
And in reverse, I realized the "sub-address" scheme I sketched above won't work with a finite memory size, because sections 0 and 1 are then not the same length, being some k and 2^k+1-k respectively. --Ørjan 14:30, 15 July 2009 (UTC)

From the addressing point of view BitBitJump is in the same category as Subleq or any microprocessor assembly language. If I understand correctly, you say that the language is TC given a word size (and memory) big enough for an algorithm, but not TC if the memory or the word size is finite. That would be true for any TC language with limited memory. However, to make a language TC, as I mentioned above, you do not need infinite memory inside the abstract machine. The pragmatics of the language (or environment) can be changed so that it can provide enough external memory to realize a particular algorithm. For a simple example consider a language which has (among some other operations) 2 input/output operations out and in defined as follows: [out 0 something] outputs something; [out 1 N X] stores X at some place N (external to the abstract machine memory); and [out 2 N, in X] reads back X from the place N. N is not bound to the memory cell size and with a particular encoding can be arbitrarily long. Note that these definitions of the in and out operations do not change the language; they change only how the input and output of the language are interpreted. The interpretation can be quite arbitrary and independent of the language: for example, show the user only the odd bits produced by the operation out. The idea behind this is that one can implement an interpreter of a TC language (say Brainfuck) inside a finite abstract machine, hence making this machine Turing-complete. An interesting point is that if you include pragmatics in the language (as a counterargument), then Brainfuck suddenly becomes non-Turing-complete, because with the conventional Brainfuck pragmatics it is not possible to write a Brainfuck self-interpreter. --Oleg 01:38, 17 July 2009 (UTC)

I don't know about assembly languages, but many of those could be non-Turing-complete, and purposefully, since it's never possible to implement a real Turing-complete language on our hardware, anyway. (No boundless memory.) It's impossible to properly emulate a Turing-complete language in a language that's computationally less capable. You can write a brainfuck interpreter in BitBitJump, but it won't be true brainfuck, it will be brainfuck with a limited array, which is not Turing-complete. And this isn't an implementation issue, it's the limit in BitBitJump; the amount of memory the brainfuck interpreter could access would be limited by the word size that's being used in BitBitJump. For all I (and seemingly Ørjan too, so I'm not alone in this opinion) can see, BitBitJump, as it is, isn't Turing-complete. Of course it could be made to be so -- one way would be making the jump happen relatively, the new address added to the current address, instead of just setting the current address to an absolute value (and to be Turing-complete it would also require, I think, the word size being some (relatively small) x or larger). --Keymaker 11:05, 17 July 2009 (UTC)
Oh, the plan above would of course need relative memory addressing too (otherwise no way to access infinite memory)... So, changing all addresses to relative addressing should do the trick. I think... --Keymaker 13:48, 17 July 2009 (UTC)
I agree. I fixed the TC statement in the article. My discussion above about external memory may not be quite correct since the state of the abstract machine is stored outside the abstract machine. --Oleg 01:41, 18 July 2009 (UTC)

I don't think the article should so casually claim that bbj/ra (bbj with relative addressing) is TC. Obviously, the mere capability to "access" unbounded memory is not sufficient for this. (On the negative side, it seems questionable whether a bbj/ra program can compute even the extremely simple function reverse(), which takes any finite bitstring as input, and outputs the same string in reverse order. Of course the i/o conventions for this program must be the same fixed conventions, whatever they are, under which the language is asserted to be TC.) --r.e.s. (Talk) 16:15, 20 August 2009 (UTC)

Yeah -- I did not guarantee the relative addressing works, I just think it does, and I've been meaning to work on the problem, but I've been too busy. (Sorry!) But given time, I think this problem will be solved. ;) --Keymaker 17:42, 20 August 2009 (UTC)
I think bbj/ra will do. Consider the memory broken into chunks (sections), each consisting of a code part and two data parts. The code part has the functionality replicating Brainfuck operations, and the data has a place for a Brainfuck command plus dynamic storage. None of those three parts has to be large. A program similar to Daniel's DBFI self-interpreter, injected as Brainfuck commands, will execute any Brainfuck program. I have the assembler and the interpreter working in absolute addressing, and it is a fair amount of work to change them to relative – especially the library. So making a test would not be simple, but I think it is possible.--Oleg 04:36, 24 August 2009 (UTC)

Turing complete? (cont)

Rather than reverting the recent editing of the article, I'll just make the following points ...

The article now contains the following statements: "Hence, BitBitJump is not truly Turing-complete, which is the same for any microprocessor assembly language. [...] BitBitJump assembly used with the library is obviously Turing-complete, because algorithms can be programmed without specification of operand sizes or bit layouts, and the language is of the same power as any real processor assembly language."

Those statements seem misleading at best, for the following reasons:

(1) For a sufficiently complex computable function f, whose computation requires unboundedly large amounts of memory to be referenced, there is no bbj program that computes f. (Consequently, no bbj program can simulate a Universal Turing Machine.) This is because, due to the absolute addressing scheme with fixed finite wordsize, each bbj program references a fixed finite amount of memory. Therefore the set of all bbj programs is not TC.

(2) Each program in bbj assembly language (bbj asl), with unspecified wordsize, describes a set of bbj programs (one for each possible wordsize). Therefore, bbj asl is not TC, because the set of all bbj programs is not TC. This point seems less significant than the other two, because it's the OISC that's the presumed focus of the article, not its assembler.

(3) bbj fails to be TC in a more profound way than do many other non-TC languages. This is because by its definition a bbj program assumes some specific fixed finite wordsize. A bbj program cannot meaningfully assume a variable finite-but-unbounded wordsize, because varying the wordsize would vary the operands. In contrast to this, for many other non-TC languages there is some bounding aspect of memory references which, if allowed to vary unboundedly, would make the language TC. (E.g., the number of tape cells in a Turing machine, or the number of fixed-capacity cells in brainfuck, or the capacity of each register in a three-register Minsky machine, etc.) However, this is not the case for bbj.
--r.e.s. (Talk) 16:09, 4 September 2009 (UTC)

You can consider the memory to be an array of bits without any division into words or cells. The processor reads a number of bits from a particular place in memory, and then interprets those bits as an instruction. The processor itself has to know exactly how many bits to read and how to interpret them. In this respect the machine is not TC, since it has a predefined amount of accessible memory. The assembler, on the other hand, with a few macro commands, does not have this limitation (note, it does not "describe a set of bbj programs, one for each possible wordsize", but rather a higher-level abstraction). It is similar to C, which is TC, but whose code compiled to a particular real processor is not. The BitBitJump instruction is not TC, but it allows TC computation in the sense that you can compile any TC language into BitBitJump code. For example, I can compile C into it. This is an important distinction, because for most non-TC languages it is not possible. Feel free to change the article if you can clarify the content. --Oleg 03:55, 7 September 2009 (UTC)
Your statement saying that BitBitJump "allows TC computation in the sense that you can compile any TC language into BitBitJump code" is glaringly false. As I just explained, exactly the opposite is the case: *No* UTM simulator program in any TC language can be compiled into a bbj program (as defined, with absolute addressing). (NB: the C language is a poor choice for comparison, because it invites terminological confusions. Standard versions of the C language are *not* TC, although calling them TC is a frequently-tolerated misuse of terminology.) To avoid unnecessary disputes about terminology, let's focus on a language that's indisputably Turing-complete, namely the language of the Turing machine model itself. Let U be a program for a Universal Turing Machine (UTM), written in the standard form of 5-tuples, say. U *cannot* be compiled into a bbj program, nor can such a bbj program be assembled by any "high level" assembler, simply because no such universal bbj program exists. Significantly, in my opinion, bbj is more profoundly non-TC than most other non-TC languages, as explained above. ---r.e.s. (Talk) 14:56, 7 September 2009 (UTC)
One idea is to use Bignums or VLQs (I don't know if this is sufficient, however). Another idea is to use a fixed word size with an infinite RAM bank (you could have a RAM area and a ROM area, writing a 0 into the ROM area selects the previous RAM bank and writing a 1 into the ROM area selects the next RAM bank, while RAM banks would all be initialized to copies of the ROM to help making self-modifying code). There are other ways, too, such as relative addressing with Bignums. --Zzo38 16:28, 7 September 2009 (UTC)
It's not so simple as to just use bignums, because each number contains an infinite number of bits, which gives problems with your address structure when you want to address bits inside more than one number. See the table in my second message way up above for one possible solution (using essentially two bignums per address). --Ørjan 05:20, 8 September 2009 (UTC)
See also my further comment above about this scheme not working for finite memory. In other words, as r.e.s. may be implying, BitBitJump makes it particularly hard to do something which works to remove turing-incompleteness in many other languages, namely to define an unbounded version as the generalized limit of finite size cases. It's not the only one though, Malbolge has a similar problem, as I found out when making Malbolge Unshackled. --Ørjan 05:29, 8 September 2009 (UTC)

(to r.e.s.) You cannot make a statement true or prove it just by underlining it or emphasizing it with stars. But there is good news. Look, I demonstrated that standard C is TC even without system calls to memory management (I assume that is what you meant when saying C is not TC).

int getchar();
int putchar(int);

struct cell {
   struct cell *p, *n;  /* previous and next cell */
   int v;
};
typedef struct cell cell;

cell *pp, *mp;          /* program pointer, memory pointer */

void exec(void);        /* the run loop, declared early so '>' can recurse into it */

void run()
{
      int lev=1;
      if( pp->v == '+' ) ++mp->v;
      else if( pp->v == '-' ) --mp->v;
      else if( pp->v == '.' ) putchar(mp->v);
      else if( pp->v == ',' ) mp->v = getchar();
      else if( pp->v == '<' ) mp=mp->p;
      else if( pp->v == '>' )
      {
         if( !mp->n )
         {
            /* a new tape cell is allocated on the stack; the rest of the
               program is run from this frame so that the cell stays alive */
            cell m; m.p=mp;
            m.n=0; m.v=0;
            mp->n=&m; mp=&m;
            pp=pp->n;
            exec();
            return;
         }
         mp=mp->n;
      }
      else if( pp->v == '[' && !mp->v )
         while(lev){ pp=pp->n; lev-=(pp->v==']')-(pp->v=='['); }
      else if( pp->v == ']' && mp->v )
         while(lev){ pp=pp->p; lev+=(pp->v==']')-(pp->v=='['); }
}

void exec(void)
{
   while(pp){ run(); if(pp) pp=pp->n; }
}

void readp(cell *p)
{
   int cmd = getchar();
   if( cmd == '!' || cmd<0 )
   {
      /* end of program: create the first tape cell and start running */
      cell m; m.p=m.n=0;
      m.v=0; mp=&m;
      exec();
   }
   else
   {
      /* append one program cell (also stack-allocated) and recurse */
      cell c; c.p = p;
      c.n=0; c.v=cmd;
      if(p) p->n = &c;
      else pp = &c;
      readp(&c);
   }
}

int main(){ readp(0); }

--Oleg 02:01, 8 September 2009 (UTC)

From what I have read, the problem is that sizeof forces every type, including pointers, to be finite, so you cannot address infinite memory. It's a pretty subtle technical restriction. --Ørjan 05:20, 8 September 2009 (UTC)
C is TC without the sizeof operator (it would be TC without many other features of C, too). The TC property cannot be removed from the language by adding extra functionality. The program above is a brainfuck (DBFI dialect) interpreter using the stack as its memory field. Note that neither pointer nor int type sizes are required to be specified. --Oleg 06:04, 8 September 2009 (UTC)
You can render a language non-TC by adding extra functionality, sort of. Adding an operator that tells you what the maximum value of a type is (or its size in bits, or whatever) implies that that type must be finite, whereas without things like that operator, nothing guarantees that ints have a maximum value. Arguably it's a case of adding new restrictions, not new features; would "the programmer can always determine the maximum possible value of a type" be a feature or a restriction, in your view? To a programmer who isn't used to working with infinite memory, that's just extra functionality; a mathematician will notice the hidden restriction there, though. (Fun case to argue with the pedants on comp.lang.c with: is it technically legal to have a C program that uses bignum pointers as long as sizeof, limits.h, size_t, pointer subtraction etc are not mentioned, and switches to finitely large pointers if they are?) --ais523 13:29, 25 September 2009 (UTC)
There is a theorem in discrete mathematics saying that a language superset cannot be of less computational power than its subset. Many theoretical proofs about the computational power of a language are based on proving the power of a minimal subset of the language. However, if you restrict the language in some sense, it is an implementation issue, which can reduce the language's usability to anything. So, I have to disagree with you. --Oleg 13:49, 25 September 2009 (UTC)

Turing complete? (cont 2)

(to Oleg)

Underlining can help a reader to immediately see what is the essential point of a whole paragraph, and asterisks are a common form of emphasis in ascii. When not misused, both can serve a useful and valid purpose. It's regrettable that you saw them only as trying to "make a statement true".

Irrespective of whether standard C is Turing-complete, the primary point is that bbj (as defined, with absolute addressing) is not TC -- a fact that you've already admitted.

An assembly language is such that all of its programs translate into the machine language of some target computer. If the target is an abstract computer called an OISC, with bbj as its machine language, then the assembly language cannot be TC (because the target language, bbj, is not TC). On the other hand, if the target computer is a universal computer whose machine language is therefore TC (and hence not a bbj OISC), then such an assembler can be TC (e.g., by using bbj-like instructions augmented by other constructs). If the latter is all that you're claiming about the assembler, then we don't disagree. The arguments given in my point (2) above were assuming the former interpretation.--r.e.s. (Talk) 15:45, 9 September 2009 (UTC)

I did not admit that bbj is not TC. The confusion was that I did not distinguish LBA and TC before. I claimed that bbj is TC in a loose sense. You said that "loosely TC is like somewhat pregnant". Please read this page (pay attention to the overview section).
Your theory on language and target machine is weird. According to it, for example, C is TC if compiled to a TC machine, but it is not TC if compiled to your computer. Does this make sense? “This language is TC! Oh, no, that depends on the target machine to which the language is compiled”.
I will repeat what is in the article here again: the bbj abstract machine is LBA (not TC); bbj assembly with the library is TC. This is proven, in my opinion. If you disagree, give arguments, not exclamations. (Note that in the first paragraph I used bbj for both the abstract machine and the assembly.)
PS I noticed that you were trying to remove bbj from the OISC wikipedia page. Why? Is this revenge for my saying that Hydra is not an OISC? (By the way, Flump is not an OISC either. Guess why.)
;) --Oleg 01:39, 10 September 2009 (UTC)
(Post-discussion remark: BitBitJump indeed got removed from the OISC wikipedia page.) --Oleg 00:43, 25 September 2009 (UTC)
You seem to be getting rather paranoid ;), speaking of "revenge", etc. Concerns about the Wikipedia page edits should be discussed on its Talk page, where they are not out-of-context (as they are here) -- I suggest that you begin participating on that Talk page. As far as my comparing TCness to being pregnant, I concede that it was a poor analogy. It's much better to simply say that bbj is profoundly non-TC.
To address the main issues:
(a) You completely mis-state what I said about assembly languages, which is basic computability theory (not some weird theory of my own): Not all programs of a TC language can be translated into a non-TC language. If all programs of a language can be translated into a TC language, then the former language might be TC.
(b) bbj is not TC. (TC means TC, not "loosely TC".) As an abstract machine, bbj is not TC, as you just now admitted yet again. Because the abstract machine is not TC, more-limited versions are certainly not TC.
(c) bbj is not "loosely TC", according to the Turing completeness article: "... Turing completeness is often loosely attributed to physical machines or programming languages that would be universal if they had unlimited storage" -- bbj fails that criterion. Abstract bbj cannot have an actually infinite wordsize, because programs would have infinite length; furthermore, abstract bbj is not TC with finite-but-unbounded wordsize, because the set of all finite-wordsize bbj programs is non-TC. Because abstract bbj is not "loosely TC", more-limited versions are certainly not "loosely TC".
(d) bbj is not in the LBA class. An LBA would be a TM except for having a maximum tape-length that varies with the input, the bound varying linearly with the size of the input. Therefore a single LBA may have arbitrarily large storage -- a feat at which every single bbj program fails. Another way of seeing that bbj is not in the LBA class is to note that as a class, LBAs *are* "loosely TC" in the above sense, while bbj is not. (Unlike bbj, an LBA has a specific linear function that bounds storage, and removing this bound makes it an ordinary Turing machine; bbj has no such removable bound.)
The last three points emphasize that bbj is profoundly non-TC, as explained earlier in my point #3, to which you never responded.
--r.e.s. (Talk) 15:08, 10 September 2009 (UTC)
The CPU inside your computer is "profoundly non-TC". It cannot access an infinite amount of memory; it cannot solve any TC-solvable task. You cannot even add a finite but large amount of memory to your computer. But the CPU has this remarkable property that it can run code compiled from TC languages. Doing this it can solve a task solvable by a Turing machine, given that the solution of the task fits inside the computer memory. Now, remove one instruction from the instruction set of the CPU. What happens? Almost nothing – the computer is still capable of solving the same tasks. Of course, it has to be reprogrammed; the compilers have to be rewritten to avoid the absent instruction and to use other instructions instead to produce functionally equivalent code. Remove another instruction. What happens? The same thing. Perhaps now it would be more difficult to change the compilers; the memory and time required to solve the same task would have to increase. But still the CPU is capable of solving all the solvable tasks as before. Continue this process. At some point, after removing a subsequent instruction, there would be no possible way to compile a program written in a TC language into the processor code; nor would it be possible to write the code directly to solve all the tasks as before. This is the point where the CPU fails to be of LBA computational class.
Do you agree with the following: 1. The above LBA explanation; 2. CPU is LBA; 3. CPU and bbj machine are in the same computational class? --Oleg 23:59, 10 September 2009 (UTC)
Your statement #3 is false, assuming the CPU is an LBA-class machine. I just explained why no bbj machine is an LBA-class machine: the latter must be capable of accessing an amount of storage that increases linearly as a function of input-size, if unlimited storage were initially made available to it. (Note that for any fixed amount of storage, the machine must access more than this amount of storage for sufficiently large inputs.) Every single bbj machine definitely fails this criterion. --r.e.s. (Talk) 15:58, 11 September 2009 (UTC)
Maybe there is some confusion here, a physical CPU with memory would be a finite-state automaton (the combined settings of all the (n, say) bits contained in it can be thought of as a state with 2n possibilities), not a linear bounded one (except as far as all former are also the latter, being in a smaller class). --Ørjan 16:18, 11 September 2009 (UTC)
Real computers belong to the LBA class, not FSA. An LBA with its memory is driven by its own state, and an FSA by some external input. So an FSA can be seen as an LBA which moves its head only in one direction (and possibly does not require the input to be finite). The confusion here is that either r.e.s. does not understand what a CPU, an LBA, or bbj is; or I do not understand something obvious. --Oleg 23:59, 11 September 2009 (UTC)
The "something obvious" that you don't seem to understand is that an LBA-class machine will access more than any fixed finite amount of storage. (An LBA will access more than any fixed finite amount of storage for sufficiently large-sized inputs. Furthermore, if a machine is LBA-class, then it must be capable of simulating an LBA for all inputs, which no machine with fixed finite storage can do.) It seems to me that you're wanting to loosely attribute LBA-class to machines that are not in that class, much as you (incorrectly) loosely attributed TC-ness to bbj.--r.e.s. (Talk) 14:05, 12 September 2009 (UTC)
(To Orjan) Notice that I said *assuming* the CPU is LBA-class (as Oleg argues it is), then Oleg's #3 is false. An LBA-class machine is necessarily an abstract machine, because of its requirement of arbitrarily large amounts of storage. (See my replies to Oleg.) --r.e.s. (Talk) 14:05, 12 September 2009 (UTC)
An LBA is a linearly bounded automaton. It has limited storage. If the limit is removed, then this automaton becomes a Turing machine. A CPU, on the other hand, is not an LBA (sorry, my statement "CPU is LBA" was confusing). A CPU belongs to the same class as an LBA, i.e. is of LBA class. You say: "an LBA-class machine must be capable of accessing an amount of storage that increases linearly". This is true for an LBA, but not for any machine of LBA class; for example, for a real CPU, for which you cannot increase the memory.
My logic is very simple:
1. If language A can interpret a TC language B, then A is TC.
2. BitBitJump assembly with library has an unrestricted brainfuck interpreter.
3. Brainfuck is TC, hence bbj assembly with library is TC.
4. If any program of a TC language A can be compiled into a program of language B, then B must be either TC or of LBA class. Otherwise if B were assumed to be of some different class, then it would be possible to prove that this class is the same as TC or LBA.
5. A program in bbj assembly compiles into a sequence of bbj machine instructions.
6. bbj machine is of LBA class, because it is not TC.
--Oleg 01:58, 12 September 2009 (UTC)
Your logic is based on a misunderstanding, namely your statement that "[An LBA] has limited storage." An LBA is an abstract machine that requires access to unlimited storage -- an arbitrarily large amount of storage is required for arbitrarily large inputs. Furthermore, an LBA-class machine must be able to simulate an LBA for all inputs, which requires it, as well, to access unlimited storage. A bbj machine cannot access unlimited storage, so it is not an LBA-class machine. As I mentioned above, it seems to me that you're wanting to loosely attribute LBA-class to machines that are not in that class, much as you (incorrectly) loosely attributed TC-ness to bbj.--r.e.s. (Talk) 14:05, 12 September 2009 (UTC)

R.e.s., your sentence "An LBA-class machine is necessarily an abstract machine, because of its requirement of arbitrarily large amounts of storage." is disturbing. In my understanding an abstract machine is an imaginary machine realizing a language, and it has nothing to do with either storage or computational classes. For example, a TM is not an abstract machine, but if you define a TM language, then a particular TM would be an abstract machine for that language.--Oleg 15:49, 12 September 2009 (UTC)

You quoted that sentence out-of-context, as it was in reference to a CPU which you said "is of LBA class". I was stating the obvious fact that *if* a CPU is in LBA class, then it must be an abstraction, an imaginary machine. I won't quibble with a definition that makes such an imaginary CPU just a part of an abstract machine, as the point is made either way.--r.e.s. (Talk) 23:53, 13 September 2009 (UTC)

Real computer computational class

Our understandings are undoubtedly different. Can your computer access unlimited storage? Is it of LBA class? --Oleg 15:08, 12 September 2009 (UTC)

The answers are "no" and "no". That's assuming that by "your computer" you refer to the physical one on my table. Unlimited storage and other such imaginary properties are properly associated with abstract machines, not with physical ones. --r.e.s. (Talk) 23:53, 13 September 2009 (UTC)
Yes, I meant the physical computer you are using. So if it is not TC, and is not of LBA class, then what class is it in? And how would you distinguish it from an abacus?
Another interesting note. "Theoretical computer science: introduction to Automata, computability..." (by Hromkovic) says on page 123: "The Turing machine is an abstract computing model whose computational power is equivalent to the computational power of real computers." I assume there is confusion about what computational class real computers belong to.
I think one can build a real TC machine. Let a real computer be able to request a bank of additional memory. Someone is obliged to go buy memory and provide it to the computer. The computer can switch between the banks of memory only in a left/right way, so the address space of the computer remains the same. Now this whole system, the real computer together with the person serving as a provider of additional memory, will really be a Turing-complete machine. I do not specify how long it takes to fetch additional memory - maybe it will require going to different planets for more silicon. But once such a mechanism is claimed to be in place, the whole system must be TC.
--Oleg 01:46, 14 September 2009 (UTC)
A physical computer is an FSA: If it has n bits of memory (including registers), then it has 2^n states; if it takes input as a serial stream of bits, it has 2 symbols; it therefore has 2^(n+1) transitions in total, so this is the largest possible program size.
In terms of computational class, you can't distinguish it from an abacus: both have a finite amount of storage; for any given real computer, you can build an abacus large enough to do the same (the difference being, an abacus requires a human operator, and it is debatable whether the operator counts as part of the computation. It depends what rules you use to manipulate the abacus)
One can never build a real TC machine (or even LBA machine), because the Universe as a whole is an FSA. This is because it contains a finite number of whatever the smallest unit of matter is (current theory says quarks), each of which has a finite number of states (for quarks, it's 6 (flavours) times 2 (normal or anti) * 3 (colours) = 36), and the whole has a finite number of arrangements. The problem with your design for a real TC machine is that, for sufficiently large problems, you will have used up all the matter in the Universe. What do you do then when it asks for more memory? To make TC, you need infinite memory and time. To make LBA, you need memory which is a linear function of problem size, and since the problem size is unbounded, you still need infinite memory. A physical machine is a Constant-Bounded Automaton (as opposed to an LBA or Linear-Bounded Automaton): memory is bounded by a constant value.
It is not possible for a machine to contain a machine of higher computational class. This is a general result easily proven. Of course, it can simulate one, but only for sufficiently small problems; once the problem exceeds a certain size, you hit the limitations of the host machine. Since the Universe is, in effect, a machine 'running' a program (its 'language' being the laws of physics, and its 'program' being its entire physical state at any given point in time), any machine you build is contained within the Universe, so you cannot build any machine of higher computational class than the Universe.
Of course, if the Universe were infinitely large, it would then be Turing-complete and so would your design. However, it is broadly accepted in physics that the Universe is of finite size, due to eg Olbers' paradox.
bbj-asm is an FSA because you cannot write a bbj-asm program which implements (contains) an LBA or TM, because the LBA's or TM's program is supplied at runtime, whereas the word-size decision is made at compile-time (and you can't implement a bbj-asm compiler in bbj, for the same reason). Any LBA can contain any other LBA, and any TM can contain any other TM. (Constant bounded automatons, including FSAs, can only contain other CBAs whose constant bound is smaller than their own) 14:59, 14 September 2009 (UTC)
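The state-counting argument at the top of this comment can be checked mechanically. A minimal sketch (Python; the function name is my own, not from the discussion) that just evaluates the stated formulas for a hypothetical n-bit machine:

```python
# A physical computer viewed as a finite-state automaton:
# n bits of memory (registers included) give 2**n states; with a
# 2-symbol serial input stream, each state has 2 outgoing transitions,
# for 2**(n+1) transitions in total.

def fsa_counts(n_bits):
    states = 2 ** n_bits
    transitions = 2 * states  # = 2**(n_bits + 1)
    return states, transitions

print(fsa_counts(3))   # (8, 16) for a toy 3-bit machine
```

For any realistic n the numbers are astronomically large, but still finite, which is the whole point of the argument.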
I agree with many of the points you've just made, but I think it could become a tremendous distraction to argue about machines in general and whether the universe is finite or infinite. In particular, it has been explained in detail why a bbj machine, even in the abstract sense, cannot access more than a finite amount of storage; consequently, a bbj machine is neither universal nor is it in the LBA class. Imagining a physical procedure to make unlimited storage available to a physical bbj machine does not accomplish more than what is imagined for an abstract bbj machine. --r.e.s. (Talk) 17:02, 14 September 2009 (UTC)
(the post above res's was me, not logged in) You have a point there. However, my last paragraph gives an alternative explanation of why bbj is not an LBA. As for the rest, what are esolangs, anyway, if not "tremendous distractions"? soundandfury 17:42, 14 September 2009 (UTC)
Since I like tremendous distractions, let me point out that the currently accepted resolution of Olbers' paradox does not say anything about the total size of the universe, only about the part visible to us. Basically it is explained by (1) the speed of light being finite and (2) the age of the visible universe being finite. I don't recall there being any real evidence limiting the size of the universe beyond the borders of visibility. (Although I think I've read about hints of effects from big masses just beyond, so that it is at least somewhat larger than what we currently see.) However, even if it is infinite, it might still be impossible to get unbounded computation out of it because of (currently apparently accelerating) expansion and increasing entropy. --Ørjan 18:31, 14 September 2009 (UTC)
Unfortunately, there is a theory that the expansion may ensure that heat death is impossible, meaning we could always get more computation out of it. However, there is another approach: assuming the Big Bang Theory to be correct, the size of the Universe is finite because its age is finite, and although it can expand faster than c (I had forgotten that when making my earlier argument), it still can only expand at a finite rate. Hence the universe is of finite size, and therefore cannot contain a TC machine. (Also, three cheers for tremendous distractions!) soundandfury 19:37, 14 September 2009 (UTC)
I can distinguish a physical computer from an abacus. And I would not trust a guy who can't. And this is not a joke!
A computer and an LBA are FSMs. However, there are many other FSMs which do not have the computational power of a computer or an LBA. So calling a computer an FSA is the same as calling it a thing, which is true but not very informative. As for bbj-asm, I will not even comment on that. --Oleg 00:27, 15 September 2009 (UTC)
Oh, and another piece of shocking news for some Turing machine experts: a Turing machine never requires infinite memory while it is working. --Oleg 00:57, 15 September 2009 (UTC)
Regarding the speculation on "Life, the Universe, and Everything" I would like to make a few corrections. First, the answer is 42, not 36. Second, talking about quarks, you forgot about their spins, as well as about other particles like leptons and photons, which are not built from quarks. Third, the number of matter arrangements may not be finite; it is definitely not proven. The "current" theory of quantum mechanics describes physical systems with wave functions, which are continuous objects. Fourth, your assumption that memory always requires matter may not be true (for example, memory can be a superposition of states). And fifth, you have just invented a new computational class, CBA! (Sorry for multiposting) --Oleg 02:25, 15 September 2009 (UTC)
I'm amazed that this guy still doesn't get it; three different people have explained it to him in three different ways. However, I'll try again... Ok, so I can also distinguish a computer from an abacus, but not in terms of computational class, rather, only in more subjective terms. Mathematically speaking, they have the same computational power, because anything which a computer can calculate, so can an abacus, assuming both are the same "size", and assuming a reasonable set of rules for manipulating the beads on the abacus. So, while they are obviously not the same thing, a computer and an abacus are in the same class of computation systems: namely, FSAs, or Constant-Bounded Machines. (Or as you call them, FSMs, which reminds me of Pastafarianism ;) )
An LBA is not an FSA; the LBA class is more powerful than the FSA class. This is because an FSM has a finite number of states (hence the name), whereas an LBA uses a number of states (or, equivalently, amount of storage) which is bounded by a linear function of the size of the problem; e.g. if you give your LBA a list of 500 numbers to sort, the size of the problem is 500, and the amount of storage used is at most 500a+b, where a and b are two constants dependent on the sorting algorithm used.
Of course there are FSAs which do not have the computational power of an LBA; this is because an LBA is a higher class than an FSA. The reason why there are FSAs without the computational power of a computer is because FSAs, being a particularly low class, are not homogeneous - in other words, some FSAs are more powerful than others, unlike Turing Machines which are all equally powerful and hence universal. The reason FSAs are not homogeneous is because some have more states (or, equivalently, storage) than others. Your bbj-asm, when compiling to a word size n, produces an FSA with a total of (2^n)^((2^n)+1) states (the +1 is because the implied PC register holds state too). This FSA cannot properly interpret a second bbj program of word size n, since there is inevitably some overhead in running an interpreter; hence bbj programs can only properly interpret strictly smaller bbj programs. (Similarly, a computer with, say, 128kB of RAM and no disk, cannot properly run an interpreter for programs in its own machine code, because if a program tries to use all of the memory, the interpreter won't be able to give it all of it because the interpreter needs some for itself) This is in stark contrast to LBAs and TC machines, which can properly interpret their own languages, because their memory is, in the case of the LBA, bounded by a linear function of the input size (which can, of course, be greater than the input size itself), and in the case of the TC machine, unbounded.
As for your "shocking news", a Turing machine requires unbounded memory, in order to be a Turing machine. In other words, it must be allowed to use more memory than any finite function of the input size. While it will never request the 'infinity-th' storage cell, it remains true that for any natural number n, it will, for sufficiently large input, request the 'n-th' storage cell; so no machine with fixed, finite storage (or even bounded storage, be that linear, quadratic, or even exponential) can ever be a Turing machine, because sufficiently large inputs will exhaust its storage. Consider the case of testing an input string to check whether brackets are correctly nested in that string. Let's say your FSA has 1000 states. Input a string of (s 1001 bits long, and it can't cope. Ok, you say, I'll increase its number of states to a million! Sorry, it still can't handle a string of (s 1000001 bits long. In fact, no FSA can solve the nesting problem for all possible inputs. An LBA, however, can be programmed to solve this problem; it only needs at most n states, where n is the size of the input. The important difference between the FSA and the LBA is: the number of states of the FSA is decided when the FSA is programmed; at 'compile time', if you will. The number of states of the LBA is determined at runtime; all that is determined beforehand are the coefficients of the linear function. A Turing machine does not even need to do that, since its storage, the tape, is infinite (or at least unbounded, which comes to the same thing).
As for your last edit, I would like to point a few things out. Firstly, showing off your knowledge of the Guide will not win you the argument (besides, you're wrong. That's the answer to the Ultimate Question of Life, the Universe, and Everything). Secondly, it doesn't really matter what the precise number of states of each particle is; the point is that each particle has a fixed, finite number of states it can be in. Thirdly, current quantum-mechanical theory involves quantised spacetime; some have even suggested that physical processes at the lowest level seem almost computational in nature, claiming that they are in fact a cellular automaton (which is a kind of FSA if restricted to a finite amount of space). Fourth (or FORTH if you like that sort of thing), if an object has a set S of possible states, such that the order of S is finite (this is the case for quarks, leptons, etc), then its set of possible superposed states is P(S), the power set of S (i.e., the set of subsets of S), which is also finite. Four-and-a-half-th, don't start sentences with 'And'. Fifth, a CBA is just another name for an FSA, which I invented purely to contrast it more clearly with the LBA class. soundandfury 03:18, 15 September 2009 (UTC)
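The bracket-nesting example in this thread can be made concrete. In the sketch below (Python; the function names and the specific bound are illustrative, not part of the original argument), the first checker caps its depth counter at a fixed bound, playing the role of an FSA with that many states, while the second uses an unbounded counter, mirroring the LBA's input-dependent storage:

```python
def nested_fsa(s, max_states):
    """Bracket checker with a hard cap on its counter: a finite-state machine.
    It (wrongly) rejects correctly nested input once the nesting depth
    exceeds its fixed state budget."""
    depth = 0
    for ch in s:
        depth += 1 if ch == '(' else -1
        if depth < 0 or depth > max_states:
            return False
    return depth == 0

def nested_unbounded(s):
    """Same algorithm with no cap: storage grows with the input, LBA-style."""
    depth = 0
    for ch in s:
        depth += 1 if ch == '(' else -1
        if depth < 0:
            return False
    return depth == 0

deep = '(' * 1001 + ')' * 1001      # correctly nested, depth 1001
print(nested_fsa(deep, 1000))        # False: the 1000-state checker gives up
print(nested_unbounded(deep))        # True
```

Whatever bound you pick for the first checker, an input one level deeper defeats it; the second checker has no such bound, which is exactly the FSA/LBA distinction being argued above.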
May I suggest that you think less "in terms of computational class"? Otherwise you may get confused and start typing your next message on an abacus. And sorry for "And" - it indeed makes the sentence difficult to understand. (But) thank you for the explanation.
And another quick suggestion to soundandfury: go to the LBA Wikipedia page; find the section "External links" below; find the first link there, which is a PDF document (by Forbes D. Lewis); and read the first two paragraphs; do not read the definition, as it is difficult. --Oleg 05:18, 15 September 2009 (UTC)
I had wondered about that sentence in the introduction of the wp page which seemed to agree that LBAs were somehow close to real CPUs. I guess that counts as a citation. :) I think this viewpoint sort of requires that the CPU be somehow built around containing the input. --Ørjan 13:43, 15 September 2009 (UTC)
(To Oleg) LBAs and TMs are abstract computational models, which are said to differ in how well they model physical computers. They are necessarily abstract, because both imagine a single machine to have a tape of unlimited length. The length cannot be limited, because a single machine must take arbitrarily large inputs. Like the other references, the article you cite treats a single LBA as a language acceptor, which requires the input to be a word of arbitrary length. --r.e.s. (Talk) 15:28, 15 September 2009 (UTC)
An LBA and LBA computational power are two distinct concepts. It will become clearer if you think about the difference between them. By the way, you have not responded to the questions in my message above (23:59, 10 September 2009 (UTC)). I will repeat them to reduce the chances of misinterpretation. 1) Do you agree with my description of the LBA concept in that message above? 2) Is a physical computer of LBA class? 3) Does a bbj machine have the same computational power as a real computer? You do not have to explain, just answer with yes/no, for example "yes,no,yes" or "no,yes,no". --Oleg 22:27, 15 September 2009 (UTC)
(Please see my dedented reply after the one by soundandfury below.)-- r.e.s. 14:19, 24 September 2009 (UTC)



(Dedent) Oleg, read this carefully. An LBA does not exist in the real world. You can't take some beige box full of silicon and say "that's an LBA", because it isn't. The only thing that is meaningful with regard to LBAs is to discuss a computational framework - a language - which has LBA computational power. bbj does not have LBA computational power, for reasons which have already been explained to you. Nor does a physical computer, for reasons which have, also, already been explained to you. However, many languages which do have LBA computational power can be partially implemented on real computers - machines which don't. The reason this is only partial is that for a sufficiently large input, the implementation will fail because it has exhausted the machine's available memory. This does not, in itself, prevent the language from being LBA-class. However, if a language contains some arbitrary restriction on the amount of memory it can access - eg if it only allows absolute addressing and its memory cells each have a fixed number of states - then that language is not LBA-class, because even if infinite memory were available to a machine for it (which is of course impossible), that machine would still not be an LBA; whereas an LBA language running on an infinite memory machine would be an LBA. Do you get it now? soundandfury 22:39, 15 September 2009 (UTC)

Here are your words: "for quarks, it's 6 (flavours) times 2 (normal or anti) * 3 (colours) = 36", but later you say "it doesn't really matter about the precise number of states of each particle". If you do not stand behind the words you write, do you think others will take them seriously? From the way you write I suspect that you are very young. If so, save your energy and your tales about the Universe and "current theories" for your girlfriend. And, please, be polite.
I do not think you actually read others' messages. 1. Can you answer the following with short yes/no answers? 2. Did you read that PDF document (by Forbes D. Lewis) I mentioned above? 3. Do you know/agree that a representative of the LBA class may not itself be an LBA? Simple questions - simple answers. This time I shall "get" it, promise. --Oleg 05:04, 16 September 2009 (UTC)

(Dedent) (To Oleg) Computability theory is a formal theory concerning the computation of mathematical functions that have infinite domain (e.g. N → N, Σ* → Σ*, etc.). (TMs and LBAs are, similarly, formal mathematical constructions.) Please check any good textbook on the subject. There is no formal theory of computability regarding functions whose domain is finite, so there is no theoretical context in which to discuss the "computational power" of machines that compute only such finite objects. In fact, Rado referred to this situation in regard to the computation of an individual well-defined integer (a finite object) as being what motivated his research on the calculation of BusyBeaver(n) for a specific value of n, even though BusyBeaver: N → N is not a computable function. To reply to your latest request concerning your questions 1-3: they are all incoherent, as I already stated, because they do not refer to a well-defined concept of computability or computational power. (By the way, it's bad form to try to control the flow of this discussion by creating sections that remove a person's reply from its proper context. I've restored soundandfury's reply to this section, where it belongs.) --r.e.s. (Talk) 14:31, 16 September 2009 (UTC)

If you do not answer the questions I asked at (Oleg 22:27, 15 September 2009) in yes/no form, I will consider that either chickening out or mocking me. In either case no dialog is possible. --Oleg 00:20, 17 September 2009 (UTC)
I misspoke above when I said all three questions were incoherent. Let me take them in reverse order: #3. A yes/no answer (as requested) seems impossible at present, assuming that by "bbj machine" you meant a physical machine. That's because no precise meaning has been given to the term "computational power" as applied to physical machines (unlike the case for abstract ones). #2. No. The LBA class contains abstract machines, which are mathematical constructions, not physical ones. #1. No. I do not agree with your description of the LBA concept. --r.e.s. (Talk) 06:52, 17 September 2009 (UTC)
I see where we disagree. Let's forget about LBAs. In your opinion, we cannot distinguish between a processor with a complete set of instructions (one which is able to run a TC program compiled into its instruction code) and a processor with an incomplete set (one which is obtained by eliminating several instructions from the set). Or, if we can tell the difference, we cannot say that one is more powerful than the other. Is my understanding correct? In my opinion, we can tell the difference: any physical device will be equal either to the first or to the second, assuming that the number of its internal states is enough to simulate or mimic that processor. For example, take the subleq instruction with limited operands and limited memory. For such a subleq processor I can find an amount of memory which is sufficient to simulate my old Intel 8086 computer with 640k of memory. But if I replace the subtraction operation in the subleq instruction with addition (addleq), it would not be possible to simulate the computer regardless of the amount of memory. So it is a matter of different beliefs. --Oleg 13:27, 17 September 2009 (UTC)
Bad example. I just realized that Addleq would be TC as well, but a more twisted one! --Oleg 01:55, 18 September 2009 (UTC)
I have this vague memory of having seen an addition-based OISC before somewhere, in a printed book or something many years ago. No idea which book it was though... --Ørjan 05:25, 18 September 2009 (UTC)
I have not seen it before. I always thought that it would not be possible to make computations with addition; otherwise it should be mentioned alongside any subleq description. If you find or remember a reference, please let me know. Thanks for the typo corrections. (By the way, it is probably better to go to the Addleq discussion page) --Oleg 06:49, 18 September 2009 (UTC)
For a finite computing system (machine+resources), the mere number of states isn't going to measure the machine's "computational power" in a reasonable way, it seems to me. Here's my take on this issue ... To say that one computational system can simulate another is to say that every function computable by the latter is computable by the former. Notice that only a function with finite domain can be computed by a finite computing system. Now, in computability theory, there is a subtlety that trivializes the computation of any finite object; specifically, all that needs to be done is to output the input encoding of the object -- nothing more needs to be done. No instructions at all are needed to do this (unless "halt" is regarded as an instruction). A function f with finite domain is just a finite set of pairs (x, f(x)), and by merely placing an encoding of that set into initial memory and then halting, nothing more needs to be done -- the function f has been trivially computed. Consequently, for a finite computing system, "what it can compute or simulate" primarily reflects the quality of the i/o encoding scheme and the amount of finite resources, rather than the "power" of the machine. --r.e.s. (Talk) 16:15, 18 September 2009 (UTC)
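The point above about the trivial computability of finite objects can be illustrated with a sketch (Python; the squaring table is my own toy example): a function with finite domain is just a finite set of pairs, so "computing" it amounts to a table lookup with no algorithmic work at all.

```python
# Any function with finite domain is a finite set of (x, f(x)) pairs.
# "Computing" it reduces to encoding that set in memory and reading out
# the stored answer; no instructions in the usual sense are needed.
f = {0: 0, 1: 1, 2: 4, 3: 9}   # squaring restricted to the domain {0,1,2,3}

def compute(x):
    return f[x]   # the whole "computation" is a table lookup

print(compute(3))   # 9
```

This is why, as argued above, "what a finite system can compute" mostly measures the encoding scheme and the amount of storage rather than any interesting notion of machine power.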
This is true from one point of view. From another, however, it would not be possible to simulate in practice even a programmable calculator in the way you described. It is possible, though, to simulate one real computer with another real computer. The problem is not the number of states of a machine, but the ability of the computer to process information in an abstract way. What I mean is that if you disable a real computer's ability to convert data into code, it stops being a computer in the sense that it is no longer able to simulate another computer. --Oleg 00:24, 21 September 2009 (UTC)

Discussions continue on LBA and OISC. --Oleg 05:25, 24 September 2009 (UTC)

Input model and complexity classes[edit]

I think this FSA/LBA/"Bounded storage machine" question hinges on what BitBitJump's input model actually is. If it has input placed initially in memory, then LBA may be the right thing. However I think I saw an assembler command for reading a character from input, and if that gets it from something like a "magic" memory address (as some OISC do) then it would be closer to an FSA. These two possibilities also apply to real CPUs. In fact that's more or less how I read the two options in the Bounded-storage machine article, which is why I tried changing the links to that since the input model of BitBitJump was not clear to me. Except that I intuitively find that "bounded storage" implies a bound that is set independently of input size, and that this gives a maximum length for possible input if it is to be placed in memory in advance. The BitBitJump bit size can be seen this way too.

As for genuine complexity classes, excluding infinite input, FSA is clearly in a smaller class than LBA, one being O(1) space and the other being O(n) space. Note that for defining space classes smaller than O(n) the input cannot be placed on the same Turing machine tape as "working" memory. The book I borrowed on the subject placed it on a separate, read-only tape (but still movable in both directions). I'm not sure if that applies below O(log n) though. Below that is when the working tape cannot even contain a pointer into the input tape, and things seem to get ugly to me. (Does the position on the input tape then count as "extra" memory?) I don't recall anything clearly about that case though. --Ørjan 09:10, 12 September 2009 (UTC)

I just quickly glanced at the BSM article, and a point of confusion is the idea of "bounded storage". If a BSM actually has fixed finite storage, as the article seems to imply, then an LBA, for example, is not a BSM. It's one thing to place a bound on the amount of storage accessible for a given input size, but it's quite another to place an absolute bound on the overall amount of storage irrespective of input size. E.g., an FSA may result in the one case and not in the other.--r.e.s. (Talk) 14:29, 12 September 2009 (UTC)
Input to a Turing machine is not the same as an input operation in programming languages. The presence or absence of an input operation in the language does not change its computational class. BSM cannot define a computational class; it is rather a classification of abstract machines. Another point is that adding extra functionality cannot reduce the computational capacity of the language. For example, if bbj is of LBA class without an input operation, it cannot become FSA class when an input operation is added, because FSA is a smaller class than LBA (if I understand that right). My point was that bbj has the same computational class as any real computer. Is that right? I think r.e.s.'s answer would be "no". Does anyone else have the same or a different opinion? --Oleg 15:24, 12 September 2009 (UTC)
Incorrect. When you add an input operation, you allow a completely new way of getting input which can be much longer than what fits into the machine. The O()-style complexity classes are defined based on the length of input, and if you allow longer input while not otherwise giving the machine more power, it can actually do less with a given length of input than if you had to increase ordinary working memory to fit the whole input in it. The way you define input to a computational model is essential to what computational class it is in. In the most basic mathematical theory ignoring output, a computational class is nothing other than a set of languages, where a language is a set of input strings (intuitively, those that give a "yes" answer from a machine corresponding to that language, although mathematically a machine is not even necessary.)
Actually, O()-style complexity classes for a computation model are not really even well-defined unless that model provides some way of handling unbounded input. At best, you get O(0). --Ørjan 17:33, 12 September 2009 (UTC)
Oh, and my "Incorrect" was to the idea that adding functionality cannot reduce computation class. I do agree that bbj (with a fixed bit size) has the same computational class as a real computer. --Ørjan 17:37, 12 September 2009 (UTC)
Where can I read about O()-style complexity classes? And what is the book you mentioned above? --Oleg 22:45, 12 September 2009 (UTC)
I think it may have been Christos Papadimitriou's Computational Complexity (I found it when browsing wikipedia yesterday, and the name rings a bit of a bell), but I am not entirely sure. It may be heavy mathematically (and I say this as a mathematician who thought it was just right for me). Otherwise, you may start at, although I don't know how readable that is if you don't already know the stuff. --Ørjan 10:48, 13 September 2009 (UTC)


The article refers to a bbj self-interpreter, which would be a single bbj program capable of interpreting all bbj programs. Excuse my scepticism, but I suspect that you are misusing the terminology, and that you have no such bbj self-interpreter. If you do have one, it could be neatly displayed by simply stating its wordsize and showing the program as a list of integers. For the benefit of those who don't wish to master your assembly language (which evidently does not produce bbj programs anyway), would you please post the self-interpreter here as such a list, or provide a link? It would be interesting to compare it to, say, Clive Gifford's subleq self-interpreter, which is a true OISC self-interpreter comprising a sequence of fewer than 120 integers. --r.e.s. (Talk) 11:44, 15 September 2009 (UTC)

I can see where your scepticism is coming from. If you compile that code into bbj machine code using the assembler, you will get this list of numbers. There is no need to understand the assembly language. Is that what you ask for? In some sense you are right: strictly speaking, that program written in assembly is not a Self-interpreter; it is a bbj machine interpreter written in bbj assembly. It would be a true self-interpreter if the code of the assembler in C++ were rewritten in bbj assembly, which is possible but very hard and pointless. I ran the "hello world" program with the bbj interpreter in between the emulator and the program. It took several hours on my machine to print the message. So bbj interpreting bbj is really slow. NB: Just in case you want to start a discussion on whether a Self-interpreter can interpret any program or only one which fits in memory, I am out of it. --Oleg 03:35, 16 September 2009 (UTC)
As before, you are completely avoiding the issue. It is not a question about what fits into the finite memory of an implementation. It is a question about whether you have a bbj self-interpreter in the abstract sense. If you do not have one in this sense, it is ridiculous to pretend that you have one at all, because it would not be an implementation of a true self-interpreter. The cited subleq self-interpreter, for example, is a program that, when placed in the unlimited memory of an abstract subleq OISC, can interpret any subleq program. An implementation of a bbj self-interpreter is not a true bbj self-interpreter. If you do not have a bbj self-interpreter in this abstract sense, then please, simply say so. --r.e.s. (Talk) 14:55, 16 September 2009 (UTC)
No problem. I do not have a bbj self-interpreter in this abstract sense. However I have a bbj self-interpreter in the concrete sense; which interprets bbj programs only under condition that they fit in memory after the interpreter. --Oleg 00:33, 17 September 2009 (UTC)
That last statement seems misleading to me. Maybe it should be called a "partial self-interpreter" (PSI)? Here's the problem ... Every bbj program has some fixed wordsize, which sets a maximum on the number of bits it can address. No matter how much memory is made available to it, your PSI cannot interpret any program longer than the number of addressable bits remaining after the PSI. (This even omits the workspace that would doubtless be needed in addition to the PSI+program. It also ignores the question of how the PSI interprets programs with a different wordsize. But nevermind those issues for now.) --r.e.s. (Talk) 07:12, 17 September 2009 (UTC)
Theoretically yes (except that there is no such term as PSI), practically no.
The relation between word size and the amount of accessible memory is not linear but exponential (equivalently, the word size grows only logarithmically with the memory). That means that if the wordsize is, say, 512 bits, then the amount of accessible memory is about 10^141 Tb, which is above any imaginable limit, i.e. practically unlimited. Even if I create a computer which runs through the memory at a speed of 1 Tb per picosecond, it will reach the limit in about 10^121 years.
It would not be difficult to write an interpreter which takes a program with a different wordsize. This is because bbj does not require the memory to be broken into words. For example, it can jump execution into an internal bit of a word. The bits are then interpreted as words starting from that current address. --Oleg 08:24, 17 September 2009 (UTC)
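A toy emulator makes the bit-addressing point concrete. The sketch below (Python; the encoding helpers are my own, and halt-on-self-jump is an assumption of this sketch rather than any official convention) implements the copy-bit-and-jump step: each instruction is three consecutive w-bit words a, b, c; the bit at address a is copied to address b and execution jumps to bit address c. Since the program counter is a bit address, nothing stops it from landing in the middle of a word.

```python
def run_bbj(mem, w, pc=0, max_steps=1000):
    """Minimal BitBitJump sketch: mem is a list of bits, w is the word size.
    Each step reads three w-bit words a, b, c starting at bit address pc,
    copies bit a to bit b, then jumps to bit address c."""
    def word(at):
        # little-endian w-bit word beginning at arbitrary bit address `at`
        return sum(mem[at + i] << i for i in range(w))
    for _ in range(max_steps):
        a, b, c = word(pc), word(pc + w), word(pc + 2 * w)
        mem[b] = mem[a]
        if c == pc:          # jump-to-self: treated as halt in this sketch
            break
        pc = c
    return mem

# one instruction (a=24, b=25, c=0) encoded as three little-endian 8-bit words
mem = [0] * 32
for pos, val in ((0, 24), (8, 25), (16, 0)):
    for i in range(8):
        mem[pos + i] = (val >> i) & 1
mem[24] = 1          # the source bit to be copied
run_bbj(mem, w=8)
print(mem[25])       # 1: the bit was copied, then the self-jump halted
```

Because `word()` decodes a word starting at any bit whatsoever, execution can indeed "jump into an internal bit of a word" and reinterpret the memory from there, as described above.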
The problem with such "finitistic" thinking is that no matter what fixed wordsize is used, essentially all (infinitely many) bbj programs remain in principle uninterpretable by a single bbj PSI. (The term "Partial Self-Interpreter" is my own invention, referring to a would-be self-interpreter -- a single program -- that can interpret some programs in the same language, but for which there exists an uninterpretable program in that language no matter how much memory is made available.) Only finitely many programs are interpretable by your PSI (which must use some particular fixed wordsize), while infinitely many are not, no matter how much memory is made available. It's this distinction that sets bbj apart from the common OISC languages: Memory usage by a bbj program is limited by an inherent feature of the language itself (the wordsize), whereas memory usage by a program in a common OISC language is limited only by how much memory is made available to it.
As to your second paragraph ... Granted, the fixed wordsize is applied to a word beginning at whatever bit is current in the flow of execution (so words may overlap, etc.). However, it seems to be pure speculation to say you can write a single bbj PSI (which will use some fixed finite wordsize) that interprets bbj programs with arbitrary wordsize. --r.e.s. (Talk) 16:05, 17 September 2009 (UTC)
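The point that bbj does not require aligned words can be made concrete. Below is a minimal sketch of one bbj step, assuming the usual semantics (copy the bit at address A to address B, then jump to C, with the three M-bit operands read little-endian from the current instruction pointer); since operands are fetched from an arbitrary *bit* address, execution may begin in the middle of what another part of the program treats as a word:

```python
def bbj_step(mem, ip, M):
    """One BitBitJump step on a flat bit array: copy the bit at address A
    to address B, then jump to C.  The operands start at bit address ip,
    so words need not be aligned to multiples of M."""
    word = lambda addr: sum(mem[addr + i] << i for i in range(M))
    a, b, c = word(ip), word(ip + M), word(ip + 2 * M)
    mem[b] = mem[a]
    return c

def store(mem, addr, value, M):
    """Write value as a little-endian M-bit word at bit address addr."""
    for i in range(M):
        mem[addr + i] = (value >> i) & 1

# Demo with 8-bit words: the instruction at bit 0 copies bit 24 to bit 25,
# then jumps back to address 0.
M, mem = 8, [0] * 64
store(mem, 0, 24, M)    # A = 24
store(mem, 8, 25, M)    # B = 25
store(mem, 16, 0, M)    # C = 0
mem[24] = 1
nxt = bbj_step(mem, 0, M)
print(mem[25], nxt)     # 1 0
```

Nothing here would stop `bbj_step(mem, 1, M)` from running; the three operands would simply be read starting one bit into the stream.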

Simplest OISC

No it's not; the simplest OISC is the one with the single instruction "nop". The instruction does nothing, then increments the IP.

If you mean the simplest Turing-complete OISC, hasn't it been shown that BitBitJump is not TC? ehird 05:18, 25 September 2009 (UTC)

...the simplest OISC able to simulate (interpret) a real computer in polynomial order. --Oleg 06:56, 25 September 2009 (UTC)
Okay, define "a real computer". I mean, a non-Turing-complete esolang is generally considered underpowered because theory is all we have, and anything with finite memory is inherently trivial: at most a table of states. So I'm not sure the "simplest OISC" moniker should be applied to something sub-TC. ehird 09:47, 25 September 2009 (UTC)
It depends on what you mean by "simplest", I think. BitBitJump is simpler mathematically than MiniMAX; but MiniMAX is likely to have a shorter implementation in most programming languages (e.g. a main MiniMAX loop is only 8 bytes of x86 assembler). MiniMAX also has the advantage of being Turing-complete if given infinite memory. --ais523 13:15, 25 September 2009 (UTC)
(to ehird) There must be an average-sized box under your monitor or table. That is a real computer. And believe me, it is not “inherently trivial” – you will not be able to replace it with “a table of states” even though it is a finite object. You can try to build a table representing the states of a simple arithmetic calculator as an exercise. --Oleg 13:27, 25 September 2009 (UTC)
Finite memory still leaves plenty of room for theory. With the LBA intuition I think you can interpret running an arbitrary BitBitJump program as being PSPACE-complete, which is more than nop is. Although it still depends on how you define simplest. --Ørjan 13:48, 25 September 2009 (UTC)
Simplest OISC in the sense of CPU design. My logic is this: OISC is the final level of RISC; RISC is a CPU design; hence I consider OISC a CPU design (as opposed to other opinions about OISC Turing-completeness). I think BitBitJump is the simplest, because any other known processor has to use logic gates such as AND or XOR, which are more complex operations than bit copying. I really suspect that it is possible to make an even simpler computation model based on bit inversion (again without logical operations). Bit inversion is simpler than the copying operation because it is reversible. But inventing this simpler model might be difficult. --Oleg 14:06, 25 September 2009 (UTC)

A possible simplification

Re-reading the spec this year has led me to believe that you can reduce the language further. While it is convenient to have A pointing to a bit somewhere, I am not sure that it is entirely necessary. So far as I can see right now (note, this is without thorough investigation) it should be possible to have A be exactly one bit. That is, if A then copy one else copy zero. This shouldn't really affect the computational ability of the language, as instead of directly referencing some condition bit, a few more instructions occur to copy that condition bit into the instruction that needs to use it. On second thought, I am not sure whether or not it is possible to reach a stage wherein you can conditionally copy a bit if that stage depends on conditionally copying a bit ... ah well, never mind then. -- Hiato 11:20, 18 June 2010 (UTC)

Gaining Turing Completeness

I propose a simple extension of BitBitJump to make it Turing Complete. We have a BitBitJump machine with word length M. We choose a specific memory location (I recommend 2^M-1) which, when a 1 is written to it, increments M and then resets itself to zero. This is similar to the I/O methods described in the article, as it relies on magic memory locations known to the interpreter. Programming such a language would be a nightmare, since the semantics would vary depending on the value of M, but this is a practical concern rather than a theoretical one.
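One possible reading of this extension as code (a sketch only; the magic-address semantics here are my interpretation of the proposal, not part of any existing interpreter):

```python
def bbj_step_ext(mem, ip, M):
    """One step of the proposed extension: as in plain bbj, copy the bit
    at A to B and jump to C, except that writing a 1 to the magic address
    2^M - 1 increments the word size M instead, and the magic cell stays 0.
    M is returned alongside the jump target, since it is now mutable state."""
    word = lambda addr: sum(mem[addr + i] << i for i in range(M))
    a, b, c = word(ip), word(ip + M), word(ip + 2 * M)
    bit = mem[a]
    if b == 2 ** M - 1 and bit == 1:
        M += 1                      # the interpreter grows its word size
    else:
        mem[b] = bit
    return c, M

def store(mem, addr, value, M):
    """Write value as a little-endian M-bit word at bit address addr."""
    for i in range(M):
        mem[addr + i] = (value >> i) & 1

# Demo with M = 4: the instruction at bit 0 tries to copy a 1 into the
# magic cell 15 = 2^4 - 1, which bumps M to 5 instead of storing the bit.
mem = [0] * 64
store(mem, 0, 12, 4)    # A = 12
store(mem, 4, 15, 4)    # B = 15 (magic)
store(mem, 8, 0, 4)     # C = 0
mem[12] = 1
nxt, M = bbj_step_ext(mem, 0, 4)
print(nxt, M, mem[15])  # 0 5 0
```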

We can imagine a program similar to the Brainfuck program +[>+] above like this:

The initialisation:

Write M 1s to memory, starting at location A.

The loop:

Set the program counter to A and attempt to write a 1.
The interpreter increments M.
Write a 1 to location A+M.
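The loop above can be modelled abstractly (tracking only the word size and which bits have been written, not an actual bbj encoding; A is a hypothetical start address):

```python
# Abstract model of the initialisation and loop above: each pass through
# the loop bumps the word size M via the magic cell, then writes one more
# bit at A + M, so the written region grows without bound.
A, M = 100, 8
written = set(range(A, A + M))   # initialisation: M ones starting at A
for _ in range(5):               # a few passes through the loop
    M += 1                       # writing to the magic cell increments M
    written.add(A + M)           # then write a 1 to location A + M
print(M, max(written))           # 13 113
```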

We may struggle to make reusable components and libraries, but we can certainly write standalone programs which use arbitrary amounts of memory.

First we parameterise our algorithm to use arbitrary-size words.
We code it in BitBitJump with words of the current M.
We add a branch which only enters the algorithm if M is large enough for it to succeed (this may read parts of the algorithm, but shouldn't modify it).
The "else" condition of the branch should traverse the block of memory containing our algorithm and the branch, from end to start, offsetting the words to have length M+1.
We then increment M and jump to the test again.
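The end-to-start offsetting pass can be sketched like this (in Python rather than in bbj itself; note that address *values* stored inside the words are not adjusted here, which a real version would also have to handle):

```python
def widen_in_place(mem, start, n_words, M):
    """Re-space n_words little-endian M-bit words at bit address start
    into (M+1)-bit slots, zero-padding the new high bit.  Walking from
    the last word to the first ensures no word is overwritten before it
    is moved, since each destination lies at or beyond its source."""
    for k in range(n_words - 1, -1, -1):
        src, dst = start + k * M, start + k * (M + 1)
        w = [mem[src + i] for i in range(M)]
        for i in range(M):
            mem[dst + i] = w[i]
        mem[dst + M] = 0            # new high-order bit

# Demo: two 2-bit words [3, 1] become the same values as 3-bit words.
mem = [1, 1, 1, 0] + [0] * 12
widen_in_place(mem, 0, 2, 2)
print(mem[:6])                      # [1, 1, 0, 1, 0, 0]
```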

-- Warbo 19:18, 16 February 2012