General blindfolded arithmetic
Blindfolded arithmetic is a programming language in which a finite but arbitrary number of expressions operate on six variables. Programs are an ordered list of assignment expressions which always execute, all inside an infinite loop. General blindfolded arithmetic is a generalized approach to the problem of ascertaining the computational class of these sorts of languages.
Core model
A blindfolded model has some number of operators. For instance, canonical blindfolded arithmetic has the operators +, -, *, and // (truncating division). The model also has some number of variables, each initialized to some value at the start. All variables take values from some specific set, usually either ℤ (as in canonical BA) or ℚ (as in Imprecision). Finally, there is an ordered list of assignment instructions, each of the form A = B (copy the value of B into A) or A *= B for any operator * (compute A * B, then put the value into A). Canonical blindfolded arithmetic is equivalent to this, and general expressions can also be decomposed into this system using storage variables. Additionally, there is some halt condition: either a variable being assigned some particular value, or an operator being applied to some particular value.
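As a concrete illustration, here is a minimal Python sketch of such a model. The instruction encoding, the variable names, and the choice of halt condition are illustrative assumptions rather than part of any particular definition.

    # A minimal sketch of a general blindfolded model over Z, assuming:
    #   - instructions are ("copy", A, B) meaning A = B, or (op, A, B) meaning A op= B
    #   - the halt condition is attempting A //= B with B = 0
    def run(program, variables, max_steps=10_000):
        ops = {
            "+": lambda a, b: a + b,
            "-": lambda a, b: a - b,
            "*": lambda a, b: a * b,
            # exact truncating division (rounds toward zero)
            "//": lambda a, b: a // b if (a < 0) == (b < 0) else -(-a // b),
        }
        for _ in range(max_steps):              # stands in for the infinite loop
            for kind, target, source in program:
                if kind == "copy":              # A = B
                    variables[target] = variables[source]
                else:                           # A op= B
                    if kind == "//" and variables[source] == 0:
                        return variables        # halt condition reached
                    variables[target] = ops[kind](variables[target], variables[source])
        return variables

    # Example: double x while counting y down; z //= y halts when y hits 0.
    print(run([("*", "x", "two"), ("-", "y", "one"), ("//", "z", "y")],
              {"x": 1, "two": 2, "y": 5, "one": 1, "z": 0}))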
General to canonical
    A = B    is    A = B + 0    or    A = B - 0    or    A = B * 1    or    A = B / 1
    A *= B   is    A = A * B    for any operation *
For canonical to general, v = a * b for any operation * can be decomposed like so:

    T = a
    T *= b
    v = T
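The same idea extends to arbitrary compound expressions, which is how general expressions decompose into this system using storage variables. Below is a rough Python sketch; the tuple encoding of expressions and the T0, T1, ... storage-variable names are illustrative choices.

    from itertools import count

    _fresh = (f"T{i}" for i in count())          # storage-variable names

    def decompose(target, expr):
        """Emit general-form instructions computing expr into target.
        expr is either a variable name or a tuple (op, left, right)."""
        if isinstance(expr, str):
            return [("copy", target, expr)]      # target = expr
        op, left, right = expr
        t_left, t_right = next(_fresh), next(_fresh)
        return (decompose(t_left, left)
                + decompose(t_right, right)
                + [("copy", target, t_left),     # target = t_left
                   (op, target, t_right)])       # target op= t_right

    # v = (a + b) * c flattens into copies and in-place operations:
    for instruction in decompose("v", ("*", ("+", "a", "b"), "c")):
        print(instruction)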
As such, canonical blindfolded arithmetic is an instance of general blindfolded arithmetic where the operators are +, -, *, and //, variable values are in the set ℤ, there are at most six variables, and the halt condition is A //= B being run with B = 0.
ℤ with +, -, * (at least FSM)
Consider the general system where there are an unlimited number of variables which belong to the set of integers. It does not seem immediately obvious how to do much at all. However, the multiplication instruction with the collapsing behavior of A * 0 = 0 allows for the implementation of NAND. If values are in the set {0, 1} then a * b is equivalent to AND:
    a   b   a * b
    0   0   0
    0   1   0
    1   0   0
    1   1   1
Still restricting variables to the set {0, 1}, boolean inversion can be done using 1 - a:
    a   1 - a
    0   1
    1   0
This allows us to construct NAND with 1 - a * b.
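As a quick sanity check in plain Python (not itself a BA program), the expression can be tabulated over {0, 1}:

    # Verify that 1 - a * b is NAND on {0, 1}
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, 1 - a * b)   # last column: 1, 1, 1, 0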
We can decompose any stateful boolean logic system into a stateless one which feeds into itself at the very end. This can be done in a number of ways, but one possible option is to view the logic machine as sitting between two unlimited sets of pins. The bottom pins serve as inputs from the last iteration, and the top pins serve as outputs to the next. In between, since we have NAND, we can construct any feed-forward boolean logic circuit. Then, using the iteration feedback, we can hold onto arbitrary state. Since the pins must be addressed explicitly in the program (they are simply variables), and the program is finite, this construction by itself is in the class of finite state automata.
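A toy example of this feedback pattern, sketched in Python: a single state pin updated each iteration by a straight-line NAND circuit, giving a two-state machine. The loop bound and variable names are only for illustration.

    # One "pin" of state, updated every iteration by a stateless circuit
    # built only from NAND = 1 - a * b (here NAND(s, s) = NOT s).
    state = 0                      # bottom pin: input from the previous iteration
    for step in range(6):          # stands in for the infinite loop
        out = 1 - state * state    # top pin: NOT state
        state = out                # feed the top pin back into the bottom pin
        print(step, state)         # toggles 1, 0, 1, 0, ...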
Of course, the variables are not restricted to the set {0, 1}. We can however force them to be, by only ever using the NAND construction and having 0 and 1 be the only constants, but that is not required. Indeed, we can infinitely increment and decrement any number of variables we'd like depending on the state of the FSM, suggesting that it is possible to implement counter machines. The existing FSM construction would serve as the control structure, with the increment/decrement variables being the counters. We can already conditionally increment (counter += 1 NAND some_cond) and decrement (counter -= 1 NAND some_cond) the counters. The only question is whether we can somehow read the state of the counters.
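Concretely, 1 NAND some_cond equals 1 - some_cond, so the counter only moves on iterations where the controlling {0, 1} condition is 0. A small Python illustration, where the condition values are arbitrary stand-ins for FSM outputs:

    # Conditional counter updates using only +, -, * and {0, 1} logic.
    def nand(a, b):
        return 1 - a * b

    counter = 5
    inc_cond, dec_cond = 1, 0          # illustrative FSM outputs
    counter += nand(1, inc_cond)       # adds 1 - inc_cond: no change here
    counter -= nand(1, dec_cond)       # subtracts 1 - dec_cond: decrements
    print(counter)                     # 4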
If we are to continue using the NAND construction in our counter construction, then we would somehow need to construct some counter system where there are infinitely many reachable states which map to a certain value, and a single reachable state which maps to a different value. These values should be in the set {0, 1}. If such a construction exists, it likely would involve multiple variables and the use of hysteresis.
This requirement, that infinitely many states map to a single value while one other state remains identifiable, can be referred to as normalization or collapse. There are many different collapsing operators we could add to this system to immediately make it Turing complete using counters. If we have some operator norm which ignores the left hand side and normalizes the right (A norm= B being A = norm(B)), where norm(B) returns 0 if B is zero and 1 otherwise for B in the nonnegative integers, then testing the value of a counter and operating the FSM based on it is trivial.
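For example, the zero test a counter machine needs becomes a single instruction. A Python sketch, where norm stands for the hypothetical added operator:

    # norm(b) = 0 if b == 0 else 1, for nonnegative b (the hypothetical operator)
    def norm(b):
        return 0 if b == 0 else 1

    counter = 3
    is_nonzero = norm(counter)       # a {0, 1} flag
    is_zero = 1 - is_nonzero         # usable directly by the NAND-built FSM
    print(is_nonzero, is_zero)       # 1 0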
Immediately Turing complete additions
The following are definitions of the norm function using common functions/operators (each line is equivalent):
The signum function
    sgn(a)
    sign(a)
    signum(a)
C-style comparisons (where true is 1 and false is 0)
    a != 0
    1 - (a == 0)
    a > 0
    1 - (a < 1)
    a >= 1
    1 - (a <= 0)
Truncating division (such that 5//2 = 2)
todo, see canonical
Absolute value (almost)
This shifts the counter over: -1 now serves as the zero value. Further, the function returns 1 for 0, 1, 2, ... and -1 for the zero value. It's not immediately obvious how to map this -1/1 logic to the normal 0/1 logic; however, it can be done with division.
    1 - (abs(x) - x)
Absolute value with any division
This works by mapping (-1, 1) → (0, 2) → (0, 1), since 0/2 = 0 and 2/2 = 1.
    ((1 - (abs(x) - x)) + 1) / 2
    (2 - abs(x) + x) / 2
Constructible is able to implement the above with sqrt(x * x) as abs(x).
In general, if there is some function which produces either -1 or 1 then adding 1 and dividing by two results in 0 or 1, with -1 → 0, 1 → 1.
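A quick numeric check of the simplified form above (plain Python; the division here is always exact, yielding 0 or 1):

    # (2 - abs(x) + x) / 2 sends the shifted zero value -1 to 0,
    # and every value 0, 1, 2, ... to 1.
    for x in [-1, 0, 1, 2, 10]:
        print(x, (2 - abs(x) + x) / 2)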
Known constructible functions
Any periodic sequence can be reproduced by this instance of GBA. To do so for a sequence with period n, construct a binary counter which properly wraps around after n. Then create indicator variables which use NOT and AND to return 1 only if the binary counter holds a certain value; each indicator signals whether the current index mod n is some particular residue. For each of the n terms, create a variable which multiplies that term's indicator by the term's value. Finally, add all of these together; the final sum is the current value of the sequence. Using this, any periodic function of an infinite counter can also be constructed: when the counter is incremented, increment the periodic function's binary counter, and when the counter is decremented, decrement the binary counter. Potentially notable periodic functions of a counter include modulo.
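Below is a Python sketch of this construction for the arbitrary period-4 example sequence 7, 3, 3, 5: a two-bit counter advanced with NAND-built XOR/NOT logic, indicators built from NOT and AND, and an indicator-weighted sum as the output. All names and the sequence itself are illustrative.

    # Periodic-sequence construction, period 4, using only +, -, * on
    # {0, 1} values plus the constant terms of the sequence.
    def nand(a, b):
        return 1 - a * b

    b0, b1 = 0, 0                                # 2-bit binary counter (wraps mod 4)
    for i in range(8):                           # stands in for the infinite loop
        # indicators: exactly one is 1, selecting the residue i mod 4
        ind0 = (1 - b1) * (1 - b0)
        ind1 = (1 - b1) * b0
        ind2 = b1 * (1 - b0)
        ind3 = b1 * b0
        value = ind0 * 7 + ind1 * 3 + ind2 * 3 + ind3 * 5
        print(i, value)                          # 7, 3, 3, 5, 7, 3, 3, 5
        # increment the binary counter: new b1 = b1 XOR b0, new b0 = NOT b0
        c = nand(b1, b0)
        b1 = nand(nand(b1, c), nand(b0, c))      # XOR built from four NANDs
        b0 = nand(b0, b0)                        # NOT built from one NAND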
ℚ with +, -, *, / (at least FSM)
If division is added, there are some interesting properties that hint at potential solutions to the normalization problem. The division operator here is rational division, which is nearly closed: 1 / 0 is undefined, and programs which utilize it are invalid. The semantics of this instance of GBA and of Imprecision are identical.
Of course, from the prior section, arbitrary stateful boolean logic can be constructed, allowing for the construction of any FSM we desire. The issue of reaching infinite readable state remains.
Consider that x / x = 1 for all x other than 0. Additionally note that x * (1 / x) takes the same values. We may be able to break a counter variable into two, the counter and the anticounter, such that counter * anticounter = 1 for nonzero counter, and 0 for zero counter. Incrementation could be accomplished with:
    counter += 1
    anticounter = 1 / counter
Since counter is never -1, this always works. Decrementation is more difficult. We want some ℚ² function which has the following mappings:
    (0, k)   → (0, j)      (for any k, j)
    (1, 1)   → (0, k)      (for any k)
    (2, 1/2) → (1, 1)
    (3, 1/3) → (2, 1/2)
    (n, 1/n) → (n-1, 1/(n-1))
Assume we had some function which for 0, 1, 2, 3, 4... was 0, 0, 1, 1, 1... Call this function indicator. If we had such a function, then the counter could be decremented as follows:
    indicator = magic_indicator_fn(counter, anticounter)
    decrement = counter * anticounter  # don't decrement if at 0
    counter -= decrement
    anticounter = indicator / (counter + (1 - indicator))
So how can this indicator be constructed? Observe the differences between counter and anticounter:

    counter       0   1   2     3     4      5      ...
    anticounter   0   1   1/2   1/3   1/4    1/5    ...
    difference    0   0   3/2   8/3   15/4   24/5   ...
The only time the difference is nonzero is when the indicator is active. Now consider repeated squaring of these values, according to:

    difference = difference * difference
    difference = difference * difference
    difference = difference * difference
    ...
This is repeated some large number of times. With a suitably large number of repetitions, the values greater than one tend to infinity, while the ones at 0 of course remain there:

    squared       0   0   ∞     ∞     ∞      ∞      ...
We can then perform a number of operations:
    1 - x         1   1   -∞    -∞    -∞     -∞     ...
    1 / x         1   1   -ɛ    -ɛ    -ɛ     -ɛ     ...
    1 - x         0   0   1     1     1      1      ...
The result is the indicator function. Decrementation could then be done as:
    indicator = 1 - (1 / (1 - (counter - anticounter)**k))
    decrement = counter * anticounter
    counter -= decrement
    anticounter = indicator / (counter + (1 - indicator))
For some constant power k, which we can achieve by iteratively squaring counter - anticounter.
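The following Python simulation sketches this scheme with k = 2^60, realized as 60 in-loop squarings. Floats stand in for ℚ here (overflow to inf plays the role of "tends to infinity"), so this only illustrates the mechanism rather than performing an exact rational computation; the names and loop bounds are illustrative.

    # Counter/anticounter with a finite number of squarings.
    SQUARINGS = 60                              # realizes k = 2**60

    def decrement(counter, anticounter):
        difference = counter - anticounter
        for _ in range(SQUARINGS):              # difference ** (2**SQUARINGS)
            difference = difference * difference
        indicator = 1 - 1 / (1 - difference)    # 0 for counter in {0, 1}, else ~1
        step = counter * anticounter            # don't decrement below 0
        counter = counter - step
        anticounter = indicator / (counter + (1 - indicator))
        return counter, anticounter

    counter, anticounter = 0.0, 0.0
    for _ in range(3):                          # three increments: 0 -> 3
        counter += 1
        anticounter = 1 / counter
    for _ in range(5):                          # five decrements, stopping at 0
        counter, anticounter = decrement(counter, anticounter)
        print(counter, anticounter, counter * anticounter)   # last column: zero flag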
If k is "infinity" then this works: the value of the indicator function is exactly as we require. If k is simply "some large constant" then error is introduced. This error depends on the distance from the 0 point, with lower numbers having worse error. Whether this specific system is Turing complete depends on whether the error can be controlled in programs which may increment and decrement arbitrarily, and whether the logic indicator function counter * anticounter has a low enough error for the logic system to reliably operate as the FSM. Ideally, the error would be able to converge to zero.
With pow
If there were a nonnegative pow operator (also often called ** or ^), the k term in the above could be variable and incremented, doubled, squared, or even exponentiated every iteration of the program. Experimentation suggests this could be convergent and could interface with the existing logic system.
Potentially this could even be done without a pow operator. Perhaps the to-the-power-k term could be taken to some constant power every iteration of the program. However, if it is reset every decrement then the k is mostly just some constant.
A potential argument for why a variable power k would give Turing completeness is that the error produced by the FSM and decrement should only ever get worse by some constant each iteration. We therefore expect the error function to be linear. The pow operation with a k that increases superlinearly should outpace the linear error. This argument might also be useful to show that this method does not work for a constant k, since linear error should outpace a constant.
If the operator accepted negative right arguments then it would be Turing complete by implementing abs with sqrt.