Talk:bitch


This is the talk page for bitch.
Use this to discuss syntax, programs, or even to prove stuff about the language!


Note: For the sake of everyone's collective sanity, do not delete anything!
Even if nothing comes out of one of these discussions, they are still useful for showing thought processes and progress on certain problems.

Attempts at proving Turing-completeness

Since bitch is a newly created language and is fairly different from a lot of the languages usually used to prove Turing-completeness, a proof is hard to come up with.
Below are the attempts currently ongoing. Feel free to join one or add a brand new one!

[Ongoing] Equivalency between bitch and Home Row by User:Helen

I've just realised that proving bitch's equivalence to brainfuck is not too hard.
The language to compile to has been changed to Home Row as per A's comment.
This will be updated once I finish the incrementing code.
The operations in Home Row and their equivalents in bitch are below in their sections.


I am pretty sure that brainfuck cannot compile to bitch, since brainfuck requires nested loops to be Turing-complete. bitch can only nest an infinite loop once. I suggest that Home Row might result in a more efficient compilation proof. --A (talk) 11:08, 15 April 2019 (UTC)

Of course, I completely forgot about the loops. I had a look at Home Row and it seems to be all good, but I didn't quite understand the loop definition. Is it similar to bitch's? - Helen (talk) 13:14, 15 April 2019 (UTC)
Oh, I understand the loop now. It might be a problem, since bitch's loop runs the first time no matter what, whereas Home Row's runs only if the current memory value is 0. - Helen (talk) 13:24, 15 April 2019 (UTC)

Anyway, the way of executing loops does not affect its computational class. --A (talk) 13:50, 15 April 2019 (UTC)
You are understanding the loops the wrong way. It says that the loop will run if the current memory spot is nonzero. --A (talk) 14:55, 15 April 2019 (UTC)

Oh of course, my bad. But either way, there is no reasonable way to replicate that in bitch in a general way. I'm glad it doesn't affect the computational class; phew. - Helen (talk) 19:57, 15 April 2019 (UTC)

Astonishingly, int-e has implemented bounded-storage brainfuck, independently of this ongoing work. --A (talk) 10:39, 18 May 2019 (UTC)


Still to be done, because it's a pain: it involves a pair of pointers that get modulo'd and multiplied and stuff like that, and no-one's up for that.
Alternatively, (integer) division can get involved, which is even worse.
Probably possible, though.

Decrementing idea

Decrementing is also easy to do: scan from the rightmost bit to the leftmost bit, until we reach a 1 bit. That 1 bit turns into 0 and all the bits lower than it become 1.
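
For reference, here is that scan written out in Python (a sketch, not bitch; it assumes an unbounded non-negative integer with at least one set bit, since decrementing 0 would scan forever):

def decrement(x):
    # Scan from the rightmost bit leftwards until we reach a 1 bit.
    n = 0
    while (x >> n) & 1 == 0:
        n += 1
    x ^= 1 << n          # that 1 bit turns into 0
    x |= (1 << n) - 1    # all the bits below it become 1
    return x

assert decrement(0b1100) == 0b1011 and decrement(8) == 7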

It's tempting to say that decrementing is possible because incrementing is possible, but I don't know if I can get constants that allow me to set the correct bits to 1.
Setting the correct bits to 0 was done by getting the bits into storage and doing #|0, which clears storage; however, the same trick can't be used to set the bits to 1.
By inverting the memory before we clear storage, we can set the bits to 0 while inverted, and they will be 1 when inverted back. In fact, the whole incrementing algorithm should decrement if the beginning value is inverted.
Fun!

Therefore, the correct decrementing algorithm is

unknown and still a work in progress

Some thoughts

The provided implementation has a 63-bit accumulator and bit-storage, which is pretty small, and Java has built-in bigints, so you might want to use those. Also, according to this StackExchange answer, a 2-stack push-down automaton, which might be emulated using the accumulator and bit-storage, is equivalent to a Turing machine. So this could be a possible way of proving TCness. Although the lack of nested loops might be a problem. TuxCrafting (talk) 08:50, 17 April 2019 (UTC)
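
For illustration, here is a minimal Python sketch (hypothetical code, not bitch) of the construction behind that equivalence: a Turing machine tape simulated by two stacks, where one stack holds everything to the left of the head and the other holds the head cell and everything to its right:

left, right = [], [0]        # blank tape, head on the first cell

def read():
    return right[-1]

def write(bit):
    right[-1] = bit

def move_right():
    left.append(right.pop())
    if not right:            # extend the tape with a blank cell on demand
        right.append(0)

def move_left():
    right.append(left.pop() if left else 0)

write(1); move_right(); write(1); move_left()
assert read() == 1 and left == [] and right == [1, 1]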

There also seems to be some inconsistency with regard to >...<: the spec says for < that it "jumps to the nearest >", but the implementation saves the position of a > only when it is executed. So for example in >#0;><, should the < jump to the first or the second >?
I will ask Helen; I have no idea. --A (talk) 11:26, 17 April 2019 (UTC)
Wait, where did you find that code snippet?
I wrote it to show the problem. It's hypothetical, but having a well-defined spec is always a good thing. TuxCrafting (talk) 11:41, 17 April 2019 (UTC)
I ran a similar program and found that it jumps to the first ">". You are right; the documentation indeed has to be modified.--A (talk) 12:04, 17 April 2019 (UTC)

I think that implementing a 2-stack push-down automaton would be trivial, as 2 stacks already have the power of simulating a tape. It might be better if you directly simulate a tape. --A (talk) 14:52, 29 April 2019 (UTC)
Spec problem fixed and the rest talked about in my user talk. - Helen (talk) 15:16, 17 April 2019 (UTC)

Another idea

It seems that we are having trouble implementing my incrementing/decrementing algorithm. That is why I made an improvement to the incrementing/decrementing algorithm.
This improvement is inspired by the following, which I typed into a Lua prompt:

Lua 5.3.5  Copyright (C) 1994-2018 Lua.org, PUC-Rio
> ~-1
0
> -~1
2
>

~-x is equal to x decremented, and -~x is equal to x incremented.
-x is the negation of x; ~x is a bitwise NOT. These identities hold because, in two's complement, -x = ~x + 1, so ~-x = x - 1 and -~x = x + 1.
In case you question me about my algorithm, I will add more examples here:

> ~-9
8
> -~9
10
> ~-12
11
> -~12
13
>

These expressions are universal, which means that you can also run them in Java, Python, etc.
It requires a front-register, which saves a pseudo-top element of a number that changes dynamically when a new bit is used. Negating inverts the pseudo-top element, and bitwise NOT-ting inverts all the bits from the pseudo-top to the bottom. If there are 8 bits in each cell, 3 bits are sufficient for representing the front-register. However, this results in recursively incrementing the front-register every time a bit is used, which might be quite hard to do.
Inverting the bits after the pseudo-top might be sufficient.
It is highly probable that I am misunderstanding these expressions.--A (talk) 10:01, 29 April 2019 (UTC)

Bitch might be Turing-incomplete

If we treated the accumulator as a counter, with [1 for increment and ]1 for decrement, we could treat it as a 1-register Minsky machine. However, two counters are required for an increment-decrement machine to be Turing-complete, as one-counter Turing equivalence requires multiplication and division, which are impossible without a temporary.
Then, if we treat the accumulator+shift register as an unbounded bit tape, it still cannot be Turing-complete, as we don't have the ability to non-destructively check only one bit of said tape, only to check whether all the bits in the accumulator section are zero or at least one of them is non-zero.
Maybe other models are possible, but it does seem like it's not Turing-complete. TuxCrafting (talk) 15:33, 17 April 2019 (UTC)

You can't independently check the value of a specific counter. Neither can you increment or decrement them without making a loop, which would prevent backward state changes due to the non-existence of nested loops or gotos. TuxCrafting (talk) 11:21, 18 April 2019 (UTC)
In addition, I'd like to see an algorithm that can copy a bit in the accumulator to a different position, but *without* using loops, as using a loop in such an atomic operation would instantly break Turing-completeness due to the necessity of a global program loop to apply state changes. TuxCrafting (talk) 11:28, 18 April 2019 (UTC)
To clarify: what I mean is that bitch doesn't have any control flow other than the non-nestable >< construct and conditional forward jumps. However, for Turing completeness, conditional backward jumps are also required. This can be emulated by putting most of the logic in an unconditional loop, but then it is impossible to build inner loops. TuxCrafting (talk) 11:44, 18 April 2019 (UTC)

Nevermind, I didn't realize you could put a valid bitch instruction after an OR, which allows copying data from the storage. The article is quite unclear on this point. It might just be Turing complete, actually. TuxCrafting (talk) 21:11, 21 April 2019 (UTC)

Completely my fault: the ability to nest instructions like that is one of the most important parts of the language, but it gets about two sentences in the main article.
I'm gonna rewrite most of the main article soon; I've been gathering feedback about things that generally aren't very clear. - Helen (talk) 22:40, 21 April 2019 (UTC)


[Unsuccessful] Compilation to Z3 derivative by User:Helen

This attempt contains programs that might be useful to prove the Turing-completeness of bitch.

User:A - Basic compilation

This programming language looks like Konrad Zuse's Z3, which was already proven to be Turing-complete by Raúl Rojas here.

First of all, Z3 is capable of executing an infinite loop. This is achieved in bitch as the > < block.
Also, Z3 has math operation evaluation capabilities.

Z3 can implement 5 operations: addition, subtraction, multiplication, division, and square-rooting.
Even though these arithmetic operations have not been shown to be implementable in bitch, only the operations of zeroing a value and incrementing a value will be necessary for Turing-completeness, as mentioned here.

Raúl Rojas shows that an instruction set with load, store, increment, zero and unconditional branching is Turing-complete.
However, Load and Store have to be indirect. They should be nested for at least 2 indices.

Also, infinite memory (a requirement for Turing-completeness) can be implemented in the Z3.

Finally, Z3 has a way to stop the program by using an invalid operation. That is . in bitch.

User:Helen - Implementing arithmetic operations

This part was proven partially unnecessary, as A had found a command set that is easier to compile to bitch. On the other hand, these programs might make bitch easier to program in.
The conditional snippet below is easy to replicate in bitch:

if (z = i) then 0 else 1

We simply use XOR to check if a number z is equal to some constant i like this:

#z^i;#1

If they are equal, the result of XORing them is automatically 0.
If they are not, the result is some non-zero number which we can detect (through the conditional ;) and then set the accumulator to 1.
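
For comparison, a sketch of the same check in Python:

def check(z, i):
    # z ^ i is zero exactly when z == i; any non-zero result means inequality.
    return 0 if z ^ i == 0 else 1

assert check(42, 42) == 0 and check(42, 7) == 1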

Mathematical operations are a bit harder to pull off.
I have managed to get an algorithm for addition.
Subtraction can be imitated through addition with negatives (due to the algorithm overflowing and actually giving the correct number). It can't.
I haven't managed to get an algorithm for multiplication or division yet.


Addition:
Binary addition is very simple.

Ignoring carry overs, each digit behaves like this:

a b a+b
0 0 0
0 1 1
1 0 1
1 1 0

This is equivalent to XOR.

In order to get full addition behaviour though (rather than just "fairly close but not really" addition behaviour), we have to consider carry overs.
This is how carry overs behave:

a b a+b's carry
0 0 0
0 1 0
1 0 0
1 1 1

This is equivalent to AND.
The carry overs are put onto the digit one order higher though.
Therefore, we left shift each carry once.

This gives us our full (recursive) algorithm.
In pseudocode:

a <- input
b <- input

oldA <- a+1
while (a != oldA) {
    withoutCarries <- a XOR b
    carries <- (a AND b) LS 1

    oldA <- a
    a <- withoutCarries
    b <- carries
}

This is perfectly doable in v4.7* bitch (takes 2 inputs):

\[16|\]32>|[32&4294967295#|0^[16|-281474976710656&]32]15&131070|[31&281470681743360|]16&4294901760|[16#|0&4294967295|[32&281474976710655]32;<|[16/
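
For reference, a direct Python transcription of the pseudocode above (a sketch; Python's unbounded integers stand in for the accumulator):

def add(a, b):
    old_a = a + 1
    while a != old_a:
        without_carries = a ^ b    # per-digit sum, ignoring carries
        carries = (a & b) << 1     # carries move one digit up
        old_a = a
        a, b = without_carries, carries
    return a

assert add(5, 7) == 12 and add(0, 0) == 0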

User:Helen - Proving required operations

In order to be Turing-complete, a language requires a very minimal instruction set (as shown by Raúl Rojas) and infinite tape (by definition).
The required instruction set consists of: load, store, increment, zero and unconditional branching.
If a language can do these and can simulate an infinite tape (or an arbitrarily sized tape), it is Turing-complete.

In order to show these are true, we must envision the bitch accumulator as a tape.
Cells are delimited portions of the tape that are n bits wide (n being an arbitrary choice of integer width).
Addressing is signed, negative numbers meaning backwards in the tape and positive numbers meaning forwards in the tape.

Loading and storing values can't really be emulated in bitch, but a parallel of setting and accessing cells can be made.

Setting cells

Setting a cell to a constant value c can be done:

#c

However, this zeroes the rest of the accumulator, which impedes simulation of a tape.
Therefore, we opt for:

|c

Given that the cell at position 0 is equal to 0, this will load the value c.
However, this is dependent on the cell being 0, which is not desirable.
There are two options: zeroing the cell at position 0, or moving right/left (arbitrarily) until we find an empty cell.
I choose to zero the cell in order to be able to maintain a knowledge of where things are positioned in memory.
Furthermore, this doesn't prevent one from storing the value at position 0 elsewhere, and therefore it is the best option.


With c being the desired constant value and n being the bit-width of the cells in the tape:

|((1<<n)-1)^((1<<n)-1)|c

For the sake of simplicity, brackets represent an expression which must be calculated beforehand and put in the source code as a literal.
If n is 8, the code would become:

|255^255|c
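
A quick Python check of the trick for n = 8 (a sketch; the OR forces the low cell to all ones, the XOR flips exactly those ones back to zero, and the final OR writes c):

x, c = 0xAB37, 0x5C
x = ((x | 255) ^ 255) | c
assert x == 0xAB5C    # low byte replaced, higher bits untouched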

Accessing cells

A value at positive position p can be accessed through the following code:

](p*n)

A value at negative position q can be accessed through the following code:

[(q*n)

Incrementing

Incrementing is simply the addition of 1.
It is also currently not implemented or working.

Incrementing can be done by checking each bit, starting from the rightmost bit going left, until we reach a 0 bit.
That 0 bit becomes 1 and all the bits lower than it become 0.

In order to check individual bits we must OR individual bits with 1 and check for equality with the previous version.
In pseudocode:

number <- input
previous <- number

n <- 0

while(number | (1<<n) = previous) {
    n <- n+1
}

number <- number | (1<<n)
number <- number ^ ((1<<n)-1)

The algorithm is currently a work in progress.
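
A Python transcription of the pseudocode (a sketch; it assumes an unbounded integer, so there is always a 0 bit to find):

def increment(number):
    previous = number
    n = 0
    while number | (1 << n) == previous:   # bit n is already 1, keep scanning
        n += 1
    number |= 1 << n                       # the 0 bit becomes 1
    number ^= (1 << n) - 1                 # the 1 bits below it become 0
    return number

assert increment(0b1011) == 0b1100 and increment(0) == 1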

Zeroing a cell

This was done in the section on setting cells but will be restated here with some explanation.
Zeroing can be done by ANDing with a number that has the desired bits - the bits you want to zero - zeroed and all other bits set to one.
However, due to the arbitrary size of bitch's accumulator, we cannot rely on this (well, we can, but I choose not to, since calculating differences of powers of two is more tedious than changing one digit).
Therefore, OR-XORing seems to be most suitable.
We can OR to set the desired bits to 1, then XOR to "flip" them, effectively zeroing them. If we want to zero the cell at positive position p with cell size n, we use the following:

]p|((1<<n)-1)^((1<<n)-1)

If we want to zero the cell at negative position q with cell size n, we use the following:

[q|((1<<n)-1)^((1<<n)-1)

Both of these will end with the tape positioned over the cell you just zeroed.

Unconditional Branching

bitch lacks unconditional branching.
It is possible to infinitely repeat code but that is all you can do unconditionally.

>code<

Alternative memory structures

The accumulator-storage combination may be quite limiting in certain circumstances.
In this case, some alternative memory structures are implemented below in bitch for convenience.

Cell-based tape

A cell-based memory can be emulated in bitch.
All that is required is an arbitrary choice of bit-width for the cells, and a size (in cells) of the currently allocated memory.
We will call the bit-width of cells n and the size of the memory a.

In terms of structuring in bitch's accumulator-storage combination, the memory will be in topmost storage (closest to the accumulator) and the accumulator will remain empty, to be used for calculations.


Loading the value of a cell

Given a cell indexed p (in a 0-based index where the leftmost cell is 0 and the rightmost is a-1), we can set the accumulator to its value by the following steps.

Copying up the entire memory to the accumulator:

|[(a*n)

ANDing the cell that we want:

&((1<<((a-p)*n))-(1<<((a-p-1)*n)))

Then shift copying it down to the bottom of the accumulator:

|]((a-p-1)*n)&((1<<n)-1)

Combined:

|[(a*n)&((1<<((a-p)*n))-(1<<((a-p-1)*n)))|]((a-p-1)*n)&((1<<n)-1)
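
To illustrate what the precomputed literals evaluate to, here is a Python sketch (not bitch) of the equivalent load on an integer model of the memory, assuming a = 4 cells of n = 8 bits with the leftmost cell in the highest bits:

a, n, p = 4, 8, 1    # memory size, cell width, target cell (0-indexed)
cell_mask = (1 << ((a - p) * n)) - (1 << ((a - p - 1) * n))

memory = 0x11223344                 # cells: 0x11, 0x22, 0x33, 0x44
value = (memory & cell_mask) >> ((a - p - 1) * n)
assert value == 0x22                # cell 1 holds 0x22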

Setting the value of a cell

Given a cell indexed "p" (in a 0-based index where the leftmost cell is 0 and the rightmost is a-1), we can transfer the value from the accumulator to the cell by the following steps.

Moving the accumulator value to the correct position to line up with p:

|[((a-p-1)*n)&((1<<((a-p)*n))-(1<<((a-p-1)*n)))

Clearing the old value:

[(a*n)|((1<<((a-p)*n))-(1<<((a-p-1)*n)))^((1<<((a-p)*n))-(1<<((a-p-1)*n)))

Copying down the new one:

|](a*n)

Clearing the accumulator:

](a*n)&0

Combined:

|[((a-p-1)*n)&((1<<((a-p)*n))-(1<<((a-p-1)*n)))[(a*n)|((1<<((a-p)*n))-(1<<((a-p-1)*n)))^((1<<((a-p)*n))-(1<<((a-p-1)*n)))|](a*n)](a*n)&0
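
The corresponding Python sketch for the store, under the same assumptions (a = 4 cells of n = 8 bits): clear the old cell with the mask, then OR in the new value at cell p's position:

a, n, p = 4, 8, 1
cell_mask = (1 << ((a - p) * n)) - (1 << ((a - p - 1) * n))

memory, new_value = 0x11223344, 0x99
memory = (memory & ~cell_mask) | (new_value << ((a - p - 1) * n))
assert memory == 0x11993344         # cell 1 now holds 0x99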

Bitwise operations on two cells

Performing bitwise operations with two cell arguments is trivial now that we have the load and set.
We can use the ^^ combination to perform arbitrary argument operations without messing with the storage.
Given an operation x, we can load two cells 0-indexed p and q and then perform q x p.

Loading cell p:

|[(a*n)&((1<<((a-p)*n))-(1<<((a-p-1)*n)))|]((a-p-1)*n)&((1<<n)-1)

Loading cell q:

[((a+1)*n)|]((a+1)*n)]((a+1)*n)&0|[(a*n)&((1<<((a-q)*n))-(1<<((a-q-1)*n)))|]((a-q-1)*n)&((1<<n)-1)[((a+1)*n)|[((a+2)*n)]n#|0]((a+2)*n)&0[(2*n)

Performing binary operation x:

^^x]n]n&0[n

Combined:

|[(a*n)&((1<<((a-p)*n))-(1<<((a-p-1)*n)))|]((a-p-1)*n)&((1<<n)-1)[((a+1)*n)|]((a+1)*n)]((a+1)*n)&0|[(a*n)&((1<<((a-q)*n))-(1<<((a-q-1)*n)))|]((a-q-1)*n)&((1<<n)-1)[((a+1)*n)|[((a+2)*n)]n#|0]((a+2)*n)&0[(2*n)^^x]n]n&0[n

[Disproven] Conjecture: bitch cannot copy one bit from memory to another part of memory while retaining existing data

edit: The specific memory-copy claim/challenge of this conjecture has been successfully disproven, multiple times over. The more general 'not TC' part still seems valid, but I'm changing the section heading so it's clear what the 'Disproven' relates to. The section still has value because it prompted useful progress in determining the computational class. Salpynx (talk) 23:42, 20 May 2019 (UTC)

I was following TuxCrafting's (now struck-through) arguments against TC above, and I don't think the ability to chain instructions helps that much (because they can't be grouped), and the challenge "to see an algorithm that can copy a bit in the accumulator to a different position" is still open.

There are multiple ways to formulate this as a challenge, but this is the simplest I could come up with:

Create a bitch program that can take an already initialised 3-bit accumulator abc and convert it programmatically to abb:

abc => abb

examples:

011 =>  011  (3 => 3)
101 =>  100  (5 => 4)
110 =>  111  (6 => 7)

(As I read it) There is no way to group or order the evaluation of chained operations, and no temporary scratch space to use different parts of the accumulator for a single operation. The accumulator cannot practically be masked in such a way that one sub-part of it can interact with another and be re-stored without losing the sub-parts not involved in the operation.

Here is the full mapping of expected results:

0 -> 0
1 -> 0
2 -> 3
3 -> 3
4 -> 4
5 -> 4
6 -> 7
7 -> 7

A simple algorithm to do this for three bits is:

(abc & 0b110) | ((abc & 0b010) >> 1)

This requires grouping and two separate logic operations on the accumulator (abc) as sub-steps. Since bitch modifies the accumulator in place, no amount of instruction chaining will help. The logic operations are only useful when combining the accumulator with number literals or user input.
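
For what it's worth, the grouped algorithm does reproduce the full mapping above; a quick Python check over all 3-bit inputs:

expected = {0: 0, 1: 0, 2: 3, 3: 3, 4: 4, 5: 4, 6: 7, 7: 7}
for abc in range(8):
    abb = (abc & 0b110) | ((abc & 0b010) >> 1)
    assert abb == expected[abc]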

Conjecture:

  • A machine with two independent stacks should be able to copy a value from one stack to the other. A machine that cannot copy a single bit in isolation from anywhere in memory to anywhere else, regardless of the actual data structures involved, has no hope of simulating two stacks.
  • There is no way to create a generalised algorithm to perform a simple bit copy in bitch, therefore the language cannot simulate two independent storage stacks or accumulators.
  • It may be possible to produce the results above in bitch by creating effectively a lookup table mapping the input to desired output, but that proves lack of Turing completeness if the only way to produce arbitrary results is to create lookup tables mapping input to output.
  • The ability to copy data from one memory location to another without destroying all other data is required equally for any TC system involving multiple stacks or multiple counter registers, and likewise for cyclic tag systems and cellular automata.

If I have somehow missed something in the spec, proving this false does not necessarily mean the language is TC. If there is a clever way to do the copy, I imagine it will require effectively placing an upper bound on the accumulator, and multiple unbounded stacks or accumulators cannot be simulated from a bounded one.

I believe this makes bitch a PDA, since one end of the accumulator can be used as a single stack, and transition decisions can be made based on that.

This talk page is getting hard to follow with many sections having questionable statements without attribution. Hopefully this bit copy challenge will be easier to address than a full TC proof. Salpynx (talk) 23:48, 7 May 2019 (UTC)

I didn't notice the implemented memory structure was supposed to be bitch, because it has parentheses () to group operators (which AFAICT bitch lacks, and which are the major hurdle to completing this challenge in pure bitch) and standard bit-shift operators << mixed in with bitch syntax. I am guessing that the intent was to precalculate all those values and substitute them into the bitch source as number literals? This is what I have noticed about all the examples given, and what this challenge is trying to prove: that binary operations cannot be done on two independent memory locations. It is easy to perform an operation with memory + number literal, but if you have to compute it all beforehand and hardcode it into bitch, it's not bitch doing the computation, at least not in a way that is TC. I tried substituting values into that code and couldn't get it to work; can you? The form for this challenge will be something like \[]/ where [] is replaced with a bitch algorithm of any size which copies the bit. If it is easy to do, then I have clearly misunderstood something about how the storage behaves, and an example would clear up the confusion. Salpynx (talk) 12:08, 8 May 2019 (UTC)
In theory you can do any terminating amount of pre-computation "by hand" (which really means as part of compiling into your language). (From Oerjan's user talk). --A (talk) 04:02, 9 May 2019 (UTC)
Copying a bit of a 3-bit accumulator can be achieved using the following piece of code: &6]3|[2&1[3|]3&7 (which also keeps the storage unmodified). One key point here is that chained instructions work on a fresh copy of the state (both accumulator and storage), so |[2 and similar instructions leave the storage unmodified.
For the record: ^^]1]1^^[1[1 achieves the same copying task (copy second bit to first bit) for any number of bits, using the ^^ trick. Note that this approach is specific to copying adjacent bits; copying to a non-adjacent location is harder. Int-e (talk) 17:28, 10 May 2019 (UTC)
More generally, any map with finite domain (old accumulator value to new accumulator value) can be implemented, as for example the ROT13 program on the main page. I don't see how to copy a bit in an unbounded accumulator though, so none of this really brings us any closer to TC-ness. This should be enough to show that bitch can simulate arbitrary PDAs though. For processing input, note that one should be able to read from input without destroying the storage using &0|\, but the Java interpreter doesn't really support this because each instance of Program creates its own new Scanner that reads a whole line of input... Int-e (talk) 05:43, 9 May 2019 (UTC)
The algorithm is right there! For larger sizes, the shifts have to be adapted as well: &14]4|[3&1[4|]4&15. Int-e (talk) 17:07, 9 May 2019 (UTC)
Congratulations! This is a valid solution to the challenge as I stated it. I was surprised it works, but I can now see how, which is a welcome outcome. I think knowing the conditions under which this is possible is helpful too. As I guessed, it places bounds on the storage, and proves the language is at least computationally equivalent to a PDA. Perhaps there is a way to break through this limitation, but it makes the next steps clearer, and provides a clear example of a useful programming technique. Salpynx (talk) 00:03, 10 May 2019 (UTC)

This conjecture is disproven by the old alternative memory structures section as I understand it.
The variables there are simply the number of cells, the size of the cells and the cell to access.
The number of cells and size of cells is simply for customizability and would be fixed by the programmer to a value of their choosing for the entire program (e.g.: 20 and 8).
The position variable is simply the position at which the cell is that you want to access.
The expressions given are simply the values needed to move and zero parts of the accumulator as necessary; they work with any values.

Binary operations on two different locations are very possible and they are the only reason for chaining operators.
The code snippet &[8 will AND two different values given that they are both 8 bits wide and in the accumulator together, one on top of the other.

The tape memory structure doesn't rely on precomputing; it allows for different-sized tapes and different-sized cells for customizability.
The values for the tape code are no less arbitrary than the value 8 in the above code. - Helen (talk) 19:23, 9 May 2019 (UTC)

I seem to have reinvented parts of the "alternative memory structures" section below for GET and PUT, though as far as I can see the main contribution, random access, is new. I don't know why A deleted that section... Int-e (talk) 21:10, 9 May 2019 (UTC)
I tried to use your cell-based memory structure early on but couldn't get it to work; here is a worked example of what I get:
With 3-bit memory, where each cell is 1 bit and the task is to copy cell 1 to the accumulator (which is slightly different from my challenge, but probably equivalent given some more steps), my abc is 012 in this structure.
  • a = 3 (memory size)
  • p = 1 (target cell, 0-indexed)
  • n = 1 (cell width)
Using the loading-a-value-from-a-cell formula, substituting the values above: |[3&4|]1
I understand that this requires the memory to already be in the storage section, so if I pre-load the storage to 0b010 using #2]3, I would expect to get 1 in the accumulator by running #2]3|[3&4|]1, but that results in 0 in the accumulator and 2 in storage. Trying it with storage initialised to 0b111 (7) results in 0b110 (6) in the accumulator and 0b111 (7) in storage. Perhaps the formulas don't work with a cell width of one, but I don't know under which conditions they do work. I don't think that section should be deleted; it's possible I am missing something about how to use the structure properly, or there is a tiny bug in one part of it that could be identified and fixed as a result of using it for a specific task. This challenge was an attempt to get clarity on how to use the theorised techniques for a specific task that did not seem obviously trivial to me. I would love to see a working example of your cell-based memory structure applied to the task above; I did try, but couldn't make it work. The addition example (I think that is yours too?) was great, as I could easily verify it did exactly what it claimed, and then spend time trying to understand how. Salpynx (talk) 00:20, 10 May 2019 (UTC)
Yeah sorry for the lack of any examples.
Given your above values:
  • a = 3 (memory size)
  • n = 1 (cell width)
  • p = 1 (target cell, 0-indexed)
The following code snippet stores and retrieves 1 from the 2nd position (the two-line <pre> tag meant I couldn't indent; anyone who can format well is free to redo this):
#1|[1&2[3|2^2|]3]3&0|[3&2|]1&1/
This is made up of two main segments:
The storage segment:
|[1&2[3|2^2|]3]3&0
The retrieval segment:
|[3&2|]1&1
Be aware that the TIO is currently not up to date and you'll need to use the interactive version or download the GitHub version.
Hopefully the TIO will be updated soon. - Helen (talk) 09:09, 11 May 2019 (UTC)
The solution to the original problem (directly copied and evaluated) would be:
]3|[3&2|]1&1|[0&1[3|1^1|]3]3&0[3
Which can be shortened to:
]3|[3&2|]1&1[3|1^1|]3&7
- Helen (talk) 09:24, 11 May 2019 (UTC)
Thanks Helen, that example does indeed fulfil my challenge using your cell-based memory structure, congratulations! I see now how it works, that the memory structure is good, and that you had figured out usable memory before I proposed my challenge. Apologies for doubting you!
Bitshift operator precedence comes after arithmetic operators in most programming languages (C, Python, JS), so when I used that formula, copied directly from the Bitch#Loading_the_value_of_a_cell section, I got 1<<2-1<<1 = 4 for "ANDing the cell that we want", which is different from (1<<2)-(1<<1) = 2, which looks like what you intended in the example. That threw me early, and I stopped digging deeper. If you don't mind, I'll try to follow the rest of the example and update your Bitch#Loading_the_value_of_a_cell section if I can make it clearer? Salpynx (talk) 23:26, 20 May 2019 (UTC)

Sketch: A RAM Machine

We sketch a bitch implementation of a machine with a 32 bit accumulator A (stored in bits 0..31 of the accumulator), a 32 bit extended accumulator X (bits 32..63), optional quick-access registers, and up to 2^26 - 1 individually addressable words of RAM. So even if bitch is not TC, it still appears to be a capable programming language. --Int-e (talk) 17:41, 9 May 2019 (UTC)

I've used this machinery to make a finite memory (so non-TC) Brainfuck interpreter; see https://github.com/int-e/bitch/blob/master/examples/brainfuck.pp and the resulting https://github.com/int-e/bitch/blob/master/examples/brainfuck (not thoroughly tested, may have bugs) --Int-e (talk) 22:47, 16 May 2019 (UTC)
In case other users get confused about how that preprocessor works, I should clarify that the preprocessor takes input from the console and outputs the resulting bitch program. --A (talk) 12:10, 20 May 2019 (UTC)
How is it supposed to work? (I have inputted a brainfuck program and could not get it to work...) Does it take a program from input or whatever?
From the .pp file: "[The p]rogram is separated from input by an exclamation mark (!)." Both the program and all input are read from stdin. A possible pitfall is that the exclamation mark is obligatory. --Int-e (talk) 14:56, 17 May 2019 (UTC)
I will be testing your implementation using ,[.-[-->++<]>+], which makes sure that a brainfuck interpreter supports all commands, nested loops, and unbounded memory. Testing halting is trivial...
Although it is quite slow, the implementation is flawless. --A (talk) 06:51, 18 May 2019 (UTC)
I've implemented a (hopefully not too buggy) bitch interpreter in C++ which is a bit faster (in the same github repo) --Int-e (talk) 13:24, 21 May 2019 (UTC)
I have an idea: you could extend the implementation to be unbounded by supplying a tape-extending operation, if this machinery supports extending the memory.
First of all, extending memory after initialization is hard, perhaps even impossible, because the design is based on each memory cell containing its own address. (Without that stored address, `poke` doesn't work.) Shifting the existing memory would invalidate the addresses stored in that memory. Oh and I wouldn't know where to even begin doing such an extension without a loop.
Note also that addresses are bounded. The current implementation uses 16 bit words, both for data and the addresses. Addresses need to be divisible by 32, so at most 2^11 - 1 = 2047 addressable memory cells are supported, and that's what the interpreter provides (minus memory used for stack, program, and registers).
For the particular application as a Brainfuck interpreter, one could change the memory cell layout to use, for example, a 24 bit address and 8 bit data, increasing the address space to 2^19 - 1 = 524287 cells, but that's a non-trivial effort. An easier approach would be to set the word size to 32 bits (BB=5), with up to 2^27 addressable memory cells. --Int-e (talk) 14:56, 17 May 2019 (UTC)
Can the word size be set dynamically? If that is a yes, bitch would be Turing-complete. --A (talk) 10:26, 22 May 2019 (UTC)
Not with this design; the word size is hardcoded in all of the building blocks (initialization, reading and writing registers, reading and writing memory, arithmetic). --Int-e (talk) 11:17, 22 May 2019 (UTC)
Wow, the macro replacements can act as variables!!! See this program (which seems to be an abuse of this preprocessor):
A=1,
,A,
A=2,
,A,
This program prints 12, which is 1 and 2 combined. This demonstrates that the word size can be set dynamically in the preliminary stage; however, there is currently no way to implement unbounded cells this way.

This will not work if you want to extend the cell length though:

CNT=4,
BB=CNT,
CNT={BB+1},
BB=CNT,
,BB,
--A (talk) 13:04, 22 May 2019 (UTC)
I am aware of the loop limitations... the maximum of 63 nested loops would be enough, as two levels of loop nesting are sufficient for Turing-completeness (refer to this page). --A (talk) 14:35, 18 May 2019 (UTC)
I meant bitch's limitation of not supporting any nested loops. But you have a point—the same machinery that supports the Brainfuck loops can also support nested loops in general. So the fixed, bounded address size is the real limitation here. --Int-e (talk) 13:03, 18 May 2019 (UTC)
Anyway, nice work!--A (talk) 13:47, 17 May 2019 (UTC)

Memory

The storage is used for memory. Addresses are 32 bits, the lowest 6 of which are always 0. Each word occupies 64 bits in memory, which store the address of the word and the value of the word itself.

The first few addresses may be used as additional registers instead, at the programmer's discretion.

Operations

None of these operations use > or <, so we can use them in loops.

The operations are not clean for the ; conditional, in the sense that the accumulator may become 0 while an instruction is executed. It should not be too difficult to make them clean, with some extra care. (The motivation would be that operations that are clean for ; can be used as part of a state machine that manages the control flow.)

  • load immediate n (clears X):
   IMM n := ^^n
  • 32 bit decrement (assumes X = 0):
   DEC :=
     ]32^1
     [32^]32]32&[32
     [31^]31]31&[31
     ...
     [2^]2]2&[2
     [1^]1]1&[1
     &0[32
  • 32 bit increment (assumes X = 0):
   INC :=
     ~]32&0[32
     DEC
     ~]32&0[32
  • load a register (assumes X = 0, n divisible by 32):
   GET n := ^^[n]32&0[32
  • store a register (assumes X = 0, n divisible by 32):
   PUT n := [n&-4294967296|]n]n
  • read a word at address in accumulator:
   # comment format:      # accumulator | reverse storage
   PEEK :=                #         A | ... A W ...
     ^^[^32               # A ... A W | ... A W ...
     ]32&0[32             #         W | ... A W ...
  • store word at address (<2^32 - 8) in accumulator, with value given in upper 32 bit of (extended) accumulator:
   # comment format:      # accumulator | reverse storage
   POKE :=                #        X A | ... A W (A+64) ...
     [&4294967295         #        X A ... A | W (A+64) ...
     [32                  #        X A ... A W | (A+64) ...
     &-4294967296         #        X A ... A 0 | (A+64) ...
     ^^[32                # X A ... A 0 (A+64) | (A+64) ...
     ^]&4294967295        # X A ... A X   ??   | (A+64) ...
     ^^]32                #        X A ... A X | (A+64) ...
     ]32                  #        X A ... A | X (A+64) ...
     ]&4294967295         #        X A | ... A X (A+64) ...
  • load extended part of accumulator (n divisible by 32) for stores:
   GETX n :=
     ]32
     GET (n+32)
     [32
  • clear extended accumulator:
   CLEARX := &4294967295
  • arithmetic with extended accumulator:
   XOR := ^]32
   AND := &]32
   OR  := |]32
   NOT := ]32~]32~[64
  • initialize memory (only needed once, at the start of the program, so using > < ; is okay):
   INIT :=
     >DEC&-64]64|[64;<[64


  • Bitwise right shift accumulator A:
   RSH n := ^^]n
  • Bitwise left shift accumulator A:
   LSH n := ^^[n
  • Swap the two accumulators:
   XCHG := ^]32^[32^]32



Here are some extra instructions I found in assembly language that might improve the usability of the instruction set (some are very trivial):
  • Shifting operations (cyclic left shift, cyclic right shift) --A (talk) 08:04, 12 May 2019 (UTC)
I have a question about the IMM instruction. Couldn't you directly use ^x (which works) instead of ^^x? --A (talk) 10:19, 10 May 2019 (UTC)
The point is that ^^n sets the accumulator to n independently of the previous value of the accumulator. You could achieve the same effect with &0^n, but that would be one more character. Int-e (talk) 14:06, 10 May 2019 (UTC)
I think I don't understand the [n^]n]n&[n operations in the decrement implementation. I believe it does the carrying, but I can't understand how it achieves it. --(this comment by A at 14:49, 10 May 2019 UTC; please sign your comments with ~~~~)
Have you tried working out a small example, say a 4 bit decrement starting from 12 = 1100b? Int-e (talk) 17:20, 10 May 2019 (UTC)
Oh right. Sorry. --A (talk) 01:14, 11 May 2019 (UTC)

Sketch: A Stack Machine

We sketch another machine with an accumulator (64 bits) and a stack (kept in the storage).
This machine simulation in bitch is created merely to extend the power of bitch programs.
Note: If mixing this stack machine with the RAM machine above is necessary, the two accumulators must be preserved somewhere inside the storage before using the commands. --A (talk) 11:54, 13 May 2019 (UTC)

Memory

The memory is just an accumulator (stored in the bitch accumulator) and an unbounded stack (stored in the accumulator storage).

Operations

None of the commands use loops.

  • Duplicate the top of the stack
DUP := ^^[32]32
  • Push a number onto the stack
PUSH x := ^^x]32
  • Pop a number
POP := [32^^0
  • Pop a number, outputting it
POPOUT := [32/^^0
  • Swap the top 2 elements of the stack
SWP := [64^]32^[32^]32]64
  • Push input
PUSHI := &0|\]32
  • Increment top of stack
INC := 
     [32~]32&0[32
     DEC
     ~]32&0
  • Decrement top of stack
DEC :=
     ^1
     [32^]32]32&[32
     [31^]31]31&[31
     ...
     [2^]2]2&[2
     [1^]1]1&[1
     &0
  • Check whether the top two elements of the stack are equal
CHK := ^^[32^[64]32
  • Check if top of stack > the second top of stack
  • Check if top of stack < the second top of stack
  • Implementation of +, -, *, /, modulo, etc. (impossible? *, /, and modulo possibly require nested loops)
  • Reverse the stack (impossible?)

Page content deletions

I'd like to point out that with MediaWiki's awful default diff, there's hardly anything worse for following talk page changes than deleting content. Also deleting other people's talk page contributions, or even your own after others have responded to them, is generally considered inappropriate, except on your own personal talk page.

We generally approximate Wikipedia policy here when we don't have a specific different tradition, so I'd sum up my understanding of what is done there: The recommended way to delete content from talk pages that are too big is to only move full sections to an archive subpage (like Talk:Bitch/Archive 1), and only when there has been no discussion in that section for a reasonably long time (which might depend on how fast discussion is happening).

--Ørjan (talk) 00:38, 10 May 2019 (UTC)

Sorry, I read that Salpynx noted that this page is too hard to follow, so I removed some of the content. --A (talk) 11:33, 10 May 2019 (UTC)

Sketch: A Turing Machine

We can implement a Turing machine on a binary alphabet. The main challenge here is to implement bitwise boolean operations that keep (most of) the accumulator intact. It turns out that this isn't too hard if one pads the data with sufficiently many always-zero bits. --Int-e (talk) 20:53, 23 May 2019 (UTC)

Is the state represented using 1 bit? Also, what do k and q mean? (You don't have to explain the meaning of k:l and r:q; I know them.) --A (talk) 10:19, 24 May 2019 (UTC)
The state is represented by as many bits as needed for the given Turing machine (the second s in ss indicates a plural; this convention comes from Haskell...). k is an adjacent letter to l and q is adjacent to r; I went down (in the alphabet --Int-e (talk) 12:59, 25 May 2019 (UTC)) rather than up to avoid confusion with s. --Int-e (talk) 12:00, 24 May 2019 (UTC)
I don't know how that can "avoid confusion". Hmm, l is directly adjacent to s, which might cause confusion: l could be considered part of the ss. Also, you wrote "I went down rather than up to avoid confusion with s."; what does I mean? I did not find I anywhere in this sketch.
The alphabet (adjacent letters represent adjacent bits). And "I": a name I call myself. Salpynx (talk) 08:37, 25 May 2019 (UTC)

A Turing machine is characterized by a transition function that takes a state and a tape value, and produces a new tape value, a new state, and whether to move the read head left, right, or not at all before the next step. If we encode states as binary strings, the transition function can be computed by a Boolean circuit.

We store the left part of the tape and the current state in the accumulator, and the right part of the tape in the storage. In order to implement the Boolean circuit, we use only every third bit of the accumulator/store data, encoding the bit b as 0 b 0. Let us write <b> for the encoding of a bit, that is, 0 b 0. For lists of bits, <x0...xn-1> expands to <x0>...<xn-1>. In addition to storing bits as triplets, we use the lowest bit of the accumulator as a one-bit accumulator A.
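
As a quick illustration of the encoding (a Python sketch, not part of the construction itself): data bit i is stored at accumulator position 3*i + 1, flanked by zero bits on either side:

def encode(bits):
    # bits[0] is the lowest-order encoded bit; each <b> occupies 3 positions.
    acc = 0
    for i, b in enumerate(bits):
        acc |= b << (3 * i + 1)
    return acc

assert encode([1, 0, 1]) == 0b010000010   # <1> <0> <1>, A bit (bit 0) clear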

We define some basic operations:

  • Set, clear, or negate A.
 STA      := |1
 CLA      := &-2
 NOTA     := ^1
  • Set, clear, or negate bn.
 SET(n)   := |{1<<3*n+1}
 CLR(n)   := &{~(1<<3*n+1)}
 NOT(n)   := ^{1<<3*n+1}
  • Store A into bn. Note that the first ^[{3*n+1} copies a lot of bits in the left tape part of the accumulator to unused bits (that are supposed to be 0); these copies are cleaned up again by the final &]{3*n+1}.
 STORE(n) := SET(n) ^[{3*n+1} NOT(n) &]{3*n+1}
  • Or A with bn. This is a slight variation of LOAD. This follows the same basic principle as STORE, but we need to add some scratch space first because otherwise the cleanup &[{3*n+1} would destroy 3n precious bits.
 OR(n)    := ^^[{3*n+1} |]{3*n+1} |1 &[{3*n+1} ^^]{3*n+1}
  • Load A from bn.
 LOAD(n)  := CLA OR(n)

These operations are sufficient for encoding arbitrary Boolean circuits (we have negation, binary or, and arbitrary fan-out). Hence we can program a circuit that takes as input a Turing machine configuration of the form

 ... <k> <l> <ss> <0> <0> <0> <0> <r> | <q> ...

where ...:k:l is the left part of the tape, r:q:... is the right part of the tape, and ss is the current state as a string of bits, possibly padded to make room for scratch bits. The circuit produces as output

 ... <k> <l'> <ss'> <0> <0> <0> <0> <r> | <q> ...     # if standing still
 ... <k> <l'> <r> <ss'> <1> <0> <0> <0> | <q> ...     # if moving left
 ... <k> <ss'> <0> <0> <0> <1> <l'> <r> | <q> ...     # if moving right

where ss' and l' are the new state and tape symbol, respectively. The actual move of the tape head can then be performed by

 ADJUST   := ]9 LOAD(0) [&3 [3 LOAD(0) ]&3 [6 &{~((3<<3)|(3<<12))}

"Etymology"?

Interestingly, the word "bitch" has been mentioned as early as 2002, even though the actual language was created in 2019. This is a log from December 14th, 2002:

22:37:12 -!- navigator has quit ("BitchX: the NEW form of birth control!").

--A (talk) 12:20, 26 May 2019 (UTC)