Bouncy Counters
Bouncy Counters is an esoteric programming language created by User:ais523 in 2024, as a reversible counter machine that is a Turing tarpit.
ais523 has been known to remark that it is hard to prove Turing-completeness of reversible counter machines, because they are bad at emulating anything other than other reversible counter machines; Bouncy Counters is intended for use in such proofs. The specific language that inspired its creation was Turn Left, although it was intended to also be useful for other reversible counter machine proofs.
Storage model
Bouncy Counters is based around a number of counters that can hold unbounded nonnegative integers. If an attempt is made to decrement a counter below 0, it becomes -1 for a brief moment, but the value immediately bounces back up to 0. (In particular, if you attempt to decrement 0, the counter's value remains at 0, but the most recent operation on the counter is considered to have been an increment operation.) Counters have names, which are nonnegative integers; the name of a counter is relevant only in that it provides a way of referring to the counter, and has no effect on the program execution otherwise.
There is no limit on how many counters can exist in any given program, but the set of which counters exist for any given program is fixed by the program and does not change at run time. The program specifies which counters exist (by specifying their names), and what their initial values are.
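As a rough illustration (not part of the specification), the bounce rule can be written in a few lines of Python; the function below is an illustrative sketch that returns both the counter's new value and the operation that is considered to have happened to it:

 def change_counter(value, sign):
     # Apply a + or - operation to a counter value; decrementing 0 "bounces",
     # leaving the value at 0 but counting as an increment.
     if sign == '+':
         return value + 1, '+'
     if value == 0:
         return 0, '+'       # bounce
     return value - 1, '-'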
Syntax
Bouncy Counters has a line-based syntax. Four sorts of line are acceptable in the program source code:
- A comment line, on which the first non-whitespace character is #, which is ignored and does nothing.
- A blank line, consisting only of whitespace, which is treated the same way as a comment line.
- A counter definition, of the form counter=value, which specifies that a counter exists in the program, and specifies its initial value. Each counter that is used in the program must be mentioned in exactly one counter definition. Optional horizontal whitespace is allowed at the start and end, and adjacent to the =.
- A side definition, of the form side side. A side is an identifier (consisting of letters, digits and underscores), followed by + or -; additionally, the longest suffix of the identifier that consists entirely of digits must name a counter. (For example, A1B23+ is a side that refers to counter 23.) The two sides in a side definition must be separated by a positive amount of horizontal whitespace, and horizontal whitespace is also allowed (but not required) at the start and end.
There is a restriction on the side definitions: if a side appears anywhere in the program, it must appear exactly once among the left-hand sides of side definitions, and exactly once among the right-hand sides of side definitions. (It is not allowed to have a side that appears more than once, or not at all, on the left or on the right.)
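The following Python sketch (function names are illustrative, and error handling is minimal; it is not a reference implementation) parses this syntax and checks the restriction on side definitions:

 import re

 def counter_of(side):
     # the counter named by the longest all-digit suffix of the side's identifier
     return int(re.search(r'([0-9]+)[+-]$', side).group(1))

 def parse(source):
     """Parse a program into (initial counter values, side definitions)."""
     counters, pairs = {}, []
     for line in source.splitlines():
         stripped = line.strip()
         if not stripped or stripped.startswith('#'):
             continue                                   # blank line or comment
         if '=' in stripped:                            # counter definition: counter=value
             name, value = (part.strip() for part in stripped.split('=', 1))
             counters[int(name)] = int(value)
         else:                                          # side definition: lhs-side rhs-side
             lhs, rhs = stripped.split()
             pairs.append((lhs, rhs))
     lhs_sides = [lhs for lhs, _ in pairs]
     rhs_sides = [rhs for _, rhs in pairs]
     # each side that appears must appear exactly once on the left and once on the right
     assert sorted(set(lhs_sides) | set(rhs_sides)) == sorted(lhs_sides) == sorted(rhs_sides)
     for side in set(lhs_sides):
         assert re.fullmatch(r'[A-Za-z0-9_]*[0-9][+-]', side) and counter_of(side) in counters
     return counters, dict(pairs)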
Semantics
In Bouncy Counters, instead of being a pointer to a location in the source program, the "instruction pointer" is a side. Execution proceeds by alternating between the following two steps:
- Counter change: Change the counter referred to by the current side, by subtracting 1 (if the side ends with -) or adding 1 (if the side ends with +). If the counter bounces, then the - in the instruction pointer becomes a +.
- Next side: Find the side definition that has the current side on the left. The current side becomes the right-hand side of that definition.
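The alternation of these two steps is small enough to sketch directly. The following Python fragment (an illustrative sketch, not a reference implementation) runs from a given side until the current side has no definition on the left, which, as described in the next section, only happens when a stop side bounces:

 import re

 def counter_of(side):
     # the counter named by the digit suffix of the side's identifier
     return int(re.search(r'([0-9]+)[+-]$', side).group(1))

 def run(counters, defs, side):
     """Alternate the two steps, starting with a next-side step (see the next
     section), until the current side has no definition on the left."""
     while side in defs:
         side = defs[side]                   # next side
         counter = counter_of(side)
         if side.endswith('+'):
             counters[counter] += 1          # counter change: increment
         elif counters[counter] > 0:
             counters[counter] -= 1          # counter change: decrement
         else:
             side = side[:-1] + '+'          # bounce: value stays 0, the - becomes a +
     return side                             # only a bounced stop side lacks a definition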
Starting and stopping the program
Two sides are called counterparts if they are identical apart from the +/- at the end. A + side that has no counterpart is called a start side. A - side that has no counterpart is called a stop side.
At the start of the program, and immediately after every time the program stops, the implementation looks at the list of start sides that refer to counters with value 0. If no such start sides exist, the program halts. If exactly one exists, execution starts from that side. If more than one exists, the implementation asks the user which one to start at. Execution starts with a "next side" action, not a "counter change" action (as though the non-existent counterpart of the start side had bounced).
Stop sides represent places where program execution could stop: if a stop side's counter change bounces, then program execution cannot continue (because no side definition refers to its counterpart). If this occurs, then the implementation tells the user which stop side the program stopped at. The implementation then makes another attempt to find a start side to start at, as described in the previous paragraph, and halts only if it can find nowhere to start.
This start/stop behaviour is not required for Turing-completeness, and rather exists as a method of modelling I/O and of easily testing programs. Implementations that do not want to implement it can take the start side as an argument, and halt after bouncing to a stop side's counterpart; this can be seen as a variation of Bouncy Counters that is still Turing-complete.
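The classification of start and stop sides, and the selection rule for where to (re)start, can be sketched as follows (a Python sketch with illustrative names; the ask-the-user step is left out):

 import re

 def classify(defs):
     """A + side with no counterpart is a start side; a - side with no
     counterpart is a stop side (counterpart = same identifier, other sign)."""
     sides = set(defs) | set(defs.values())
     starts = {s for s in sides if s.endswith('+') and s[:-1] + '-' not in sides}
     stops = {s for s in sides if s.endswith('-') and s[:-1] + '+' not in sides}
     return starts, stops

 def startable(defs, counters):
     """Start sides that currently refer to a counter holding 0: the program halts
     if this set is empty, starts there if it has one element, and otherwise the
     implementation asks the user which one to use."""
     counter_of = lambda s: int(re.search(r'([0-9]+)[+-]$', s).group(1))
     return {s for s in classify(defs)[0] if counters[counter_of(s)] == 0}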
Reversibility
Bouncy Counters is reversible; given the current side and the value of the counters, it is possible to determine what the previous side was and how it changed the counters, meaning that a program can be run backwards until its start (at which point it observes a counter having bounced from a - side that does not exist in the program).
There is a simple way to change a program to its reverse version: swap all + and - in the program, and swap the LHS and RHS of every side definition.
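On the parsed form used in the sketches above, this transformation is tiny; the following Python fragment (illustrative, not normative) produces the side definitions of the reversed program:

 def flip(side):
     # swap the trailing + and - of a side
     return side[:-1] + ('-' if side.endswith('+') else '+')

 def reverse_program(defs):
     """Swap every + and -, and swap the LHS and RHS of every side definition
     (counter definitions are unaffected by the transformation)."""
     return {flip(rhs): flip(lhs) for lhs, rhs in defs.items()}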
Computational class
With five counters
Imagine a generalisation of Bouncy Counters in which the restriction on the side definitions were loosened to allow a side to appear on the RHS of multiple side definitions, and/or to appear on the RHS but not LHS. That generalisation is just an alternative syntax for a Minsky machine, which is known to be Turing-complete. As such, to prove Bouncy Counters to also be Turing-complete, all that is needed is to prove that the restriction can be worked around.
In order to allow a side to appear on the RHS of two side definitions, what is needed is a "merger subroutine", that can be entered using two sides but always leaves through the same side, and can be used any number of times. That way, it would be possible to replace the two uses of the same side with the entry sides of the merger subroutine, and define the exit side of the merger subroutine to have the original side as its next side.
This can be done by giving an explicit implementation of a merger subroutine:
 # Enter via: A1- or B1-
 # Exits via: C1+
 # counter 1 must be 0, for the entry to work
 1 = 0
 # but counter 2 and 3 can have any value
 2 = 0
 3 = 0
 A1+ C2-
 C2- A1+
 B1+ C3-
 C3- B1+
 C2+ C3+
 C3+ C1-
 C1- C2+
This merger subroutine functions as follows:
- If the subroutine is entered by bouncing B1-, it adds the value of counter 3 to counter 2, restoring counter 1 back to 0, then exits by bouncing C1-.
- If the subroutine is entered by bouncing A1-, it adds 1 plus the value of counter 2 to counter 3, restoring counter 1 back to 0, then exits by bouncing C1-.
The values of counters 2 and 3 thus act as a bit bucket that remembers where the code came from, allowing an apparently irreversible merge of control flow: a full history of the merges is stored in the counters in order to maintain reversibility.
The ability to use a merger subroutine to merge two sides makes it possible to merge arbitrarily many sides, simply by using the merger subroutine multiple times to repeatedly merge two sides into one, until there is only one of the equivalent sides left. Additionally, it is possible for all the mergers to share the same counters (Bouncy Counters is not concurrent, so there is no situation in which two different mergers would be running at once; and any possible state that one merger could leave the counters in is a valid starting state for any other merger), so the mergers collectively only increase the number of required counters by 3.
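One way to check this behaviour is to step through the merger's side definitions mechanically. The following Python sketch (the same step loop as in the Semantics sketch, repeated so that the fragment is self-contained; the starting counter values are arbitrary) runs the merger from both entry points:

 import re

 MERGER = {'A1+': 'C2-', 'C2-': 'A1+', 'B1+': 'C3-', 'C3-': 'B1+',
           'C2+': 'C3+', 'C3+': 'C1-', 'C1-': 'C2+'}

 def run(counters, defs, side):
     counter_of = lambda s: int(re.search(r'([0-9]+)[+-]$', s).group(1))
     while side in defs:
         side = defs[side]
         c = counter_of(side)
         if side.endswith('+'):
             counters[c] += 1
         elif counters[c] > 0:
             counters[c] -= 1
         else:
             side = side[:-1] + '+'          # bounce
     return side

 # Entering via a bounced A1- means starting at A1+; likewise B1- and B1+.
 for entry in ('A1+', 'B1+'):
     counters = {1: 0, 2: 4, 3: 7}
     exit_side = run(counters, MERGER, entry)
     print(entry, '->', exit_side, counters)
 # Expected: both runs exit at C1+ with counter 1 back at 0; entering via A1-
 # adds 1 + counter 2 to counter 3 (7 becomes 12), and entering via B1- adds
 # counter 3 to counter 2 (4 becomes 11).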
In order to complete the proof, there is an additional restriction that needs to be loosened: that every side that appears on the LHS of a side definition also appears on the RHS. This restriction would end up naturally being violated by programs compiled from Minsky machines. (For example, imagine creating an infinite loop by connecting the C1 output of a merger to its A1 input with code like C1+ A1-: C1+ would never appear on the right of a rule, and A1- would never appear on the left.) The trick here is to notice that the right-hand sides in question refer to decrements that will bounce every time they occur (e.g. A1- will always bounce, because it is only used to enter the merger, and counter 1 always has value 0 at that time). As such, it is possible to add dummy side definitions with those sides on the left, in the knowledge that those side definitions will never be used and thus their right-hand sides don't matter. The number of "unmatched" sides on the RHS must, mathematically, match the number of unmatched sides on the LHS, so they can be paired up arbitrarily into dummy side definitions in order to create a legal program.
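This pairing step is purely mechanical; as a Python sketch (operating on a list of (LHS, RHS) pairs, with illustrative names):

 def add_dummy_definitions(pairs):
     """Pair each side that appears only on the RHS (a decrement that always
     bounces) with a side that appears only on the LHS, as a dummy definition
     that can never actually be consulted."""
     lhs_sides = [lhs for lhs, _ in pairs]
     rhs_sides = [rhs for _, rhs in pairs]
     rhs_only = [s for s in rhs_sides if s not in lhs_sides]
     lhs_only = [s for s in lhs_sides if s not in rhs_sides]
     assert len(rhs_only) == len(lhs_only)     # the counts necessarily match
     return pairs + list(zip(rhs_only, lhs_only))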
With two counters
It is possible to compile any Bouncy Counters program into one that uses only two counters; because Bouncy Counters is Turing-complete given sufficiently many counters, this proves that it is also Turing-complete even if only two counters are available.
This program demonstrates the general technique used by the compiler:
 # counter 1 stores the values of all the emulated counters
 1 = 1
 # counter 2 is a temporary that is usually 0
 2 = 0
 # Divide counter 1 by 2
 2D2+ 2D1-
 2D1- 1P2_1-
 1P2_1- 2D2+
 2D1+ A2D2-
 A2D2- 2D1+
 # Multiply counter 1 by 2
 2M2+ 2M1-
 2M1- 2M2+
 A2M2- 1P2_1+
 1P2_1+ 2M1+
 2M1+ A2M2-
 # Divide counter 1 by 3
 3D2+ 3D1-
 3D1- 1P3_1-
 1P3_1- 2P3_1-
 2P3_1- 3D2+
 3D1+ A3D2-
 A3D2- 3D1+
 # Multiply counter 1 by 3
 3M2+ 3M1-
 3M1- 3M2+
 A3M2- 2P3_1+
 2P3_1+ 1P3_1+
 1P3_1+ 3M1+
 3M1+ A3M2-
When starting at 2M2, it will multiply counter 1 by 2, stopping at A2M2 – likewise, starting at 3M2 will multiply counter 1 by 3, stopping at A3M2. When starting at 2D2, it will divide counter 1 by 2 (if it divides exactly), stopping at A2D2, and likewise for 3D2 dividing by 3 and stopping at A3D2. If a division fails because the input isn't divisible, the division will "bounce", leaving the counters the same but stopping as though it had just multiplied; for example, if counter 1 is not divisible by 3, starting at 3D2 will leave the counters the same and end at A3M2. This is accomplished by sharing some of the sides between the division and multiplication routines, so that a bounce during division (at a point that implies the division was not exact) will bounce into the corresponding part of the multiplication routine which will then reverse the division.
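This behaviour can be checked by stepping through the program's side definitions; the following Python sketch (the same step loop as before, with arbitrarily chosen values for counter 1) exercises the multiplication routine, an exact division, and a division that bounces:

 import re

 # Side definitions of the program above, as a Python dict (LHS -> RHS).
 DEFS = {'2D2+': '2D1-', '2D1-': '1P2_1-', '1P2_1-': '2D2+', '2D1+': 'A2D2-', 'A2D2-': '2D1+',
         '2M2+': '2M1-', '2M1-': '2M2+', 'A2M2-': '1P2_1+', '1P2_1+': '2M1+', '2M1+': 'A2M2-',
         '3D2+': '3D1-', '3D1-': '1P3_1-', '1P3_1-': '2P3_1-', '2P3_1-': '3D2+',
         '3D1+': 'A3D2-', 'A3D2-': '3D1+',
         '3M2+': '3M1-', '3M1-': '3M2+', 'A3M2-': '2P3_1+', '2P3_1+': '1P3_1+',
         '1P3_1+': '3M1+', '3M1+': 'A3M2-'}

 def run(counters, defs, side):
     counter_of = lambda s: int(re.search(r'([0-9]+)[+-]$', s).group(1))
     while side in defs:
         side = defs[side]
         c = counter_of(side)
         if side.endswith('+'):
             counters[c] += 1
         elif counters[c] > 0:
             counters[c] -= 1
         else:
             side = side[:-1] + '+'              # bounce
     return side

 for start, value in (('2M2+', 5), ('3D2+', 12), ('3D2+', 10)):
     counters = {1: value, 2: 0}
     print(start, value, '->', run(counters, DEFS, start), counters[1])
 # Expected: doubling 5 gives 10 and stops at A2M2+; dividing 12 by 3 gives 4 and
 # stops at A3D2+; dividing 10 by 3 bounces, leaving 10, and stops at A3M2+.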
This program can be extended to primes other than 2 and 3, and to arbitrarily many multiplication/division routines for each of the counters (including generating multiple distinct routines for the same prime). That makes it possible to represent a multiple-counter Bouncy Counters program directly: each counter of the emulated program becomes the exponent of a particular prime number within counter 1 of the emulating program, and each side compiles into a multiplication or division routine (a - side divides by the prime corresponding to the counter being decremented, a + side multiplies). A side definition can be implemented by renaming an output of one multiplication/division routine to match the input of another, e.g. A3M2 could be connected to 2D2 by replacing all copies of A3M2 with 2D2. The start/stop behaviour is otherwise entirely metacircular (a start side in the emulated program becomes a start side in the emulating program, and likewise for stop sides) – this emulation is not quite perfect, because it could offer starting points as options that would not have been choosable in the original program, but getting the I/O exactly right is not required for Turing-completeness.
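The encoding of the emulated counters into counter 1 is the usual prime-factorisation trick; the following Python sketch (the assignment of primes to counters is an arbitrary example) shows what the multiplication and division routines achieve at the level of the encoded value:

 PRIMES = {0: 2, 1: 3, 2: 5}        # one prime per emulated counter (example assignment)

 def encode(counters):
     # pack a multi-counter state into the single value held by counter 1
     value = 1
     for name, count in counters.items():
         value *= PRIMES[name] ** count
     return value

 def increment(value, name):        # what a + side's multiplication routine achieves
     return value * PRIMES[name]

 def decrement(value, name):        # what a - side's division routine achieves
     prime = PRIMES[name]
     if value % prime == 0:
         return value // prime, False   # divided exactly
     return value, True                 # not divisible: the emulated decrement "bounces"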