1mpr0mp2

1mpr0mp2 is an event-driven language created by User:Quintopia in 2010 to serve the very small niche of creating one-bit media in hardware. It may, perhaps, one day be compiled to Arduino assembly language and tested, but for now it is only an idea.

Overview
An 1mpr0mp2 program consists of a list of events, which are used to control a (finite, hardware-dependent) list of pins, an arbitrary-length list of 8-bit accumulators, and an arbitrary-size bit array. The MAIN event is automatically triggered upon start of execution. Execution proceeds by polling the event queue for events that can be executed at the current moment. Execution ends when the event queue is empty.

I/O
P0,P1,P2,...,Pn refer to I/O pins. They are pre-defined events which, upon execution, flip the state of the corresponding pin. Such events may not be scheduled for input pins (see the Input section).

Data
A0,A1,A2,...,An refer to 8-bit unsigned accumulators. The number of these is limited only by available memory.

M0'0, M0'1, M0'2, ... M0'n

M1'0, M1'1, M1'2, ... M1'n

M2'0, M2'1, M2'2, ... M2'n

.

.

.

Mn'0, Mn'1, Mn'2, ... Mn'n

refer to an arbitrary size two-dimensional bit array, limited only by available memory.

Indices
It is also legal to index into the bit array, or specify accumulators, using other accumulators as indices, e.g.:

MA1'A2

AA0

Sub-indices are allowed to any level of recursion (e.g. AAAAA0).
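Since the language is unimplemented, the following Python sketch is only a hypothetical model of how such recursive references might resolve; the function name and data layout are my own, not part of the language.

```python
# Resolve a recursive accumulator reference such as "AAA0" (model sketch):
# strip the trailing number, then follow one accumulator lookup per "A".

def resolve(name, accs):
    depth = 0
    while name[depth] == "A":       # count the chain of A's
        depth += 1
    value = int(name[depth:])       # the literal index at the end
    for _ in range(depth):          # each A is one level of indirection
        value = accs[value]
    return value

accs = [3, 7, 0, 1]
assert resolve("A0", accs) == 3     # A0
assert resolve("AA0", accs) == 1    # A(A0) = A3
assert resolve("AAA0", accs) == 7   # A(A(A0)) = A1
```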

Event Queue
The event queue is an array of 256 linked-lists ("accumulator array"), plus an array of linked lists corresponding to the input pins ("input array"). There is also a single list of events to be processed immediately ("immediate list"), and a list of events to be processed when the bit array is updated ("M-list"). Each accumulator is actually a pointer to a list in the accumulator array. Events may appear in many of these lists if their execution depends upon a combination of simultaneous conditions. When events are created, they are placed at the end of any lists which their triggering conditions may correspond to. The actual execution of the event handler is described below.

Atomic Events
Every command is the creation of an event, and the insertion of an appropriate atomic event or event macro pointer into the event queue. The following are the available atomic events that user event macros can trigger:


 * P0, P1, etc.: Referencing the name of an output pin toggles its current value from low to high or vice versa. Input pin events cannot be created by the program: Dependent events are only executed when their state is changed by external input.
 * CP0, CP1, etc.: The C prefix indicates that the output pin should be driven low. Again, this is only legal for output pins.
 * M0'0, M0'1, etc.: Referencing a bit in the bit array sets that bit to 1.
 * CM0'0, CM0'1, etc.: The C prefix sets the bit to 0.
 * A0,A1,A2, etc.: Referencing an accumulator creates an event that increments that accumulator.
 * CA0, CA1, CA2, etc.: Adding the C resets the accumulator to zero.
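The effect of these atomic events on program state can be modeled in Python. This is an illustrative sketch only (the state layout and function names are hypothetical), but it follows the rules above: P toggles, M sets, A increments mod 256, and the C prefix clears.

```python
# Minimal model of 1mpr0mp2 atomic events (hypothetical sketch).
# State: output pins, a 2-D bit array, and 8-bit unsigned accumulators.

def make_state(num_pins=2, msize=(4, 4), asize=4):
    return {
        "pins": [0] * num_pins,                             # P0, P1, ...
        "bits": [[0] * msize[1] for _ in range(msize[0])],  # Mr'c
        "accs": [0] * asize,                                # A0, A1, ...
    }

def fire(state, event):
    """Apply one atomic event, named as in the text (e.g. 'P0', 'CA1', "M0'1")."""
    clear = event.startswith("C")
    body = event[1:] if clear else event
    kind, rest = body[0], body[1:]
    if kind == "P":                         # toggle pin, or drive low with C
        i = int(rest)
        state["pins"][i] = 0 if clear else state["pins"][i] ^ 1
    elif kind == "M":                       # set bit to 1, or clear with C
        r, c = (int(x) for x in rest.split("'"))
        state["bits"][r][c] = 0 if clear else 1
    elif kind == "A":                       # increment mod 256, or reset with C
        i = int(rest)
        state["accs"][i] = 0 if clear else (state["accs"][i] + 1) % 256

s = make_state()
fire(s, "P0"); fire(s, "A1"); fire(s, "M0'1"); fire(s, "CA1")
# s now has P0 high, M0'1 set, and A1 back at zero
```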

One specifies to the compiler the size of the bit array and the number of accumulators one intends to use at the beginning of the program, as such:

asize 300;    @use 300 accumulators

msize 500,500; @use a 500x500 bit array

Macros
Macros are user-defined events. Every program begins by execution of the MAIN macro event. The syntax for specifying an event macro is:

define EVENTNAME { EVENT1; EVENT2; *EVENT3 [condition]; EVENT4 [condition2]; }

Defining a macro with the same name as a previously defined macro is legal, and will associate that name with the new macro (without affecting any instances in the event queue of the previous macro to hold that name). Macros may also be renamed, like: "define MACRO1 MACRO2". Macros may not define or rename macros. It is also possible to create an unnamed macro and place it directly on the event queue, e.g.:

{EVENT1;EVENT2;EVENT3} [EVENT1];

There are a few things to note here. First of all, all event names are in all-caps by convention and begin with a letter. There is no limit on event name length, but be reasonable, you know? EVENT1, EVENT2, etc. could be either atomic events or other macros. Scheduling an event by preceding it with * has the semantic meaning: schedule this event only if it does not already exist. In other words, without the *, you can schedule the same event to occur at the same point in time as often as you like. This is a good way to add a large amount to a single accumulator (although a better way would be to implement a faux binary adder, if you plan on doing this a lot). Finally, note that some events are scheduled with conditions. These conditions consist of arithmetic expressions, and determine where in the event queue the event will be placed for efficient scheduling.

When scheduling conditionally, the event handler makes this guarantee: the event will be triggered after the first time the conditional expression evaluates to true. Allowed logical operations are the following: & (and),| (or),~ (not),^ (xor). Allowed arithmetic operators are the following: +,-,*,/,% (all having the same semantics as in C). The convention for conversion from numbers to booleans is as follows: Positive numbers are false, everything else is true. All precedence is applied as in C, with arbitrary use of brackets allowed to modify precedence. (To get an idea of this convention, note that ~-1 and 0&1 are false, and 1|-1 is true.) Valid operands are any numbers in the range [0,255], the names of any input pins (P0, P1, etc.), the names of any accumulators (A0, A1, etc.), and the names of any bits in the bit array (M0'0, M1'0 ,M0'1, etc.)
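The number-to-boolean convention is unusual enough to be worth checking against the examples in the text. This Python sketch (my own model, not part of any implementation) encodes "positive numbers are false, everything else is true":

```python
# The 1mpr0mp2 truth convention, modeled in Python (illustrative sketch).

def truthy(n):
    # Positive numbers are false; zero and negatives are true.
    return n <= 0

# The three examples from the text:
assert (not truthy(-1)) is False            # ~-1 is false (-1 is true)
assert (truthy(0) and truthy(1)) is False   # 0&1 is false (1 is false)
assert (truthy(1) or truthy(-1)) is True    # 1|-1 is true (-1 is true)
```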

Event Scheduler/Handler
Heavy implementation details follow:

The event scheduler/handler can be thought of as running in cycles, though these cycles will of necessity not correspond to actual processor cycles.

Upon receiving an event to schedule, the scheduler checks if there is a condition attached. If not, it schedules it to occur on the next cycle by placing it on the immediate list. If so, it evaluates the condition and places a new event in all the possible places in the queue where it could be executed. For instance:

FOO [(A0+8)%64]

would place a FOO event in the accumulator array at locations 56, 120, 184, and 248, which would be marked with the condition "must be executed by A0" and

BAR [A0-A1&A1-A0]

would place an event at every location in the accumulator array, each carrying the conditions "must be executed by A0 or A1" and "A1 = A0".
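The FOO placement can be verified mechanically: the condition is true exactly when the expression is not positive, which for a value in [0,255] means it equals zero. A small Python check (an illustrative sketch, not scheduler code):

```python
# Which accumulator-array slots would FOO [(A0+8)%64] occupy?
# The condition is true when the expression is not positive; since
# (A0+8)%64 is never negative, that means it must equal 0.

def placements(cond):
    return [a0 for a0 in range(256) if cond(a0) <= 0]

slots = placements(lambda a0: (a0 + 8) % 64)
assert slots == [56, 120, 184, 248]   # matches the locations in the text
```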

If the condition depends ONLY upon the value of a bit in the bit array, it will be placed on the "M-list" dedicated to such events.

(In actual fact, the event itself is not duplicated into all of the lists in which it is placed, but rather a pointer to it is placed in all these places. This, in combination with an internal reference counting table with bit flags specifying which events are active and which are already done and marked for deletion, allows events to be removed from the queue upon execution in amortized constant time.)


 * 1) At the beginning of each cycle, the handler first executes all events in the immediate list, keeping track of any changes to any accumulators changed and whether the bit array has been changed.
 * 2) Next, if any input pins have been driven since the last cycle, it searches their corresponding lists for events that can be run and executes them (still watching for new accumulator or bit array updates).
 * 3) Then, it does the same for the lists to which accumulators that have just been updated point to in the accumulator array (still watching for accumulator and bit array updates).
 * 4) Finally, if the bit array has been updated, it runs through the list of events in the M-list and tries to run those events (still watching for accumulator and bit array updates).
 * 5) Steps 3 and 4 are repeated until no more changes can be processed.
 * 6) Idle until next cycle begins.
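The fixed-point behavior of steps 1 and 3-5 can be sketched in Python. This toy model is entirely hypothetical (it omits step 2's input pins and all the pointer/reference-counting machinery), but it shows the essential loop: drain the immediate list, then keep draining accumulator lists and the M-list until no further changes occur.

```python
# Toy model of the handler's fixed-point loop (steps 1, 3-5).
# Each event is a callable returning (set_of_touched_accs, bits_changed).

def run_cycle(immediate, acc_lists, m_list, state):
    changed_accs, bits_changed = set(), False
    for ev in immediate:                       # step 1: immediate list
        a, b = ev(state)
        changed_accs |= a
        bits_changed |= b
    while changed_accs or bits_changed:        # step 5: repeat 3 and 4
        accs, changed_accs = changed_accs, set()
        prev_bits, bits_changed = bits_changed, False
        for acc in accs:                       # step 3: lists of updated accs
            for ev in acc_lists.pop(acc, []):
                a, b = ev(state)
                changed_accs |= a
                bits_changed |= b
        if prev_bits:                          # step 4: the M-list
            for ev in m_list[:]:
                m_list.remove(ev)
                a, b = ev(state)
                changed_accs |= a
                bits_changed |= b

# Example cascade: an immediate event increments A0, which wakes an event
# in A0's list that sets a bit, which in turn wakes an event on the M-list.
state = {"A": [0, 0], "M": [0]}
def inc_a0(s): s["A"][0] += 1; return ({0}, False)
def set_m(s):  s["M"][0] = 1;  return (set(), True)
def inc_a1(s): s["A"][1] += 1; return ({1}, False)

run_cycle([inc_a0], {0: [set_m]}, [inc_a1], state)
# The whole cascade settles within one cycle: A0=1, M0=1, A1=1.
```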

Each cycle is guaranteed to occur at set intervals, so it may be that time runs out on a given cycle. There is a dedicated period at the end of each cycle which is spent deferring events that should have been executed now to the next cycle by placing them at the END of the immediate list. Their conditions are checked right then before they are placed there, as events on the immediate list are never condition-checked. Note that if there are consistently more events to process than there is time to process them, there will eventually be more events on the immediate list than there is time to process them. As the handler never gets to check on any conditional events in this situation, it will begin dropping events. This will eventually lead to the inability to do anything useful at all. Since this is most likely the result of programmer error, the scheduler will halt and hold all output pins indefinitely high in the situation that on some cycle it does not reach the end of the immediate list. This is like the program waving a white flag and saying "I give up!"

Contradictory Changes
It may be that some cycle contains many events that toggle a pin along with many events that clear it. The same is true for bits in the bit array.

The way this is handled is as follows: the parity of the number of toggles is remembered for the output pins along with whether or not a clear occurred for that pin. If a clear did occur, then the pin is set to the recorded parity. If not, then the recorded parity is XOR'd with the pin's current state.
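The resolution rule is compact enough to state as a function. The sketch below (hypothetical naming, illustrative only) applies the rule exactly as described: a clear makes the toggle parity the new state; otherwise the parity is XOR'd into the current state.

```python
# Resolve contradictory pin changes accumulated within one cycle.

def resolve_pin(current, num_toggles, clear_occurred):
    parity = num_toggles % 2
    if clear_occurred:
        return parity          # clear: the pin is set to the parity
    return current ^ parity    # no clear: parity XOR'd with current state

assert resolve_pin(0, 3, True) == 1    # clear + odd toggles -> high
assert resolve_pin(1, 2, False) == 1   # even toggles, no clear -> unchanged
```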

For the bit array, no attempt is made to collect the various sets and clears for any given element, since they are not expected to remain in state for an entire cycle. Each set lasts only until some other event clears it. The programmer should take care to ensure that events are not competing if they want data in the bit array not to get clobbered.

Optimizing
The event scheduler/handler should not be stupid. If it gets a second request to toggle P0 on the next cycle, it should delete the first rather than creating a new event. The same goes for bits in the bit array, and, for that matter, for incrementing an accumulator more than 255 times in a cycle, should some program be silly enough to do that.

Of course, for some events it may not be clear at the time of adding them that they are cancelling the effects of already-scheduled events. Creating efficient data structures to predict when this might be happening could be an interesting line of research.

Input
One specifies which pins will be used for input at the beginning of the program as follows:

input P2,P3,etc.;

This changes the semantics of those pins as indicated above.

The scheduler/handler polls these pins once at the beginning of each cycle and remembers their values for the whole cycle.

Output
The scheduler/handler processes all changes to pins before actually changing their state once at the end of each cycle.

Comments
@ causes the remainder of the line to be ignored

@@ causes everything up to the next @@ to be ignored

Example
This does something like toggle pins P0 and P1 at a constant frequency for a short period only after a button connected to P2 has been pressed:

input P2; asize 3; msize 0,0;

define MAIN { CA0; CA1; CA2; COUNTTIME; TRACK; DEBOUNCE [P2]; }

define TRACK { A1 [A0]; TRACK [A0]; }

define TRACK2 { A2 [A0%4]; TRACK2 [A0%4]; CA2 [A2%4]; }

define COUNTTIME { A0; COUNTTIME; }

define DEBOUNCE { CA2; TRACK2; DEBOUNCE [~P2]; START [-A2+1]; }

define START { FOO [(A2+2)%4]; }

define FOO { P0 [A2]; P1 [A2]; FOO [(A2+2)%4&~A1]; MAIN [A1]; }

Complexity Class
This language is as powerful as a linear-bounded automaton. If one could specify the size of the bit array or number of accumulators to be infinite, it would be Turing-complete.

Proof: We want to construct a bounded "successor machine" (by e.g. Schönhage). Thus we need to show that we can construct arbitrary size registers (although not infinite: we only want the bounded version), and implement the following 3 instructions:


 * CLR (r): CLeaR register r. (Set r to zero.)
 * INC (r): INCrement the contents of register r.
 * JE (rj, rk, z): IF the contents of register rj Equals the contents of register rk THEN Jump to instruction z ELSE continue in sequence.

One can construct a register of arbitrary size by concatenating several accumulators as such:

define CONCAT { AA1 [AA0]; CONCAT [AA0]; }

Thus, the accumulator pointed to by A1 becomes the higher-order bits, and that pointed to by A0 the lower-order bits of the same register. Notice that this one macro constructs arbitrarily large registers if one increments A0 and A1 and calls it again.

Incrementing this register is easy. The AA0 event primitive does it. (This is regardless of the size of the concatenated ad hoc register.) Thus, we have a correlate to the INC (r) instruction.
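The carry propagation is worth spelling out: the [AA0] condition on CONCAT's high-order increment is true exactly when the low accumulator's value is no longer positive, i.e. when it has wrapped to zero. A Python model of a two-accumulator register (an illustrative sketch, not language semantics):

```python
# A two-accumulator register as built by CONCAT (model sketch): the low
# accumulator wraps mod 256, and the high accumulator is bumped exactly
# when the low one wraps to 0 -- the condition [AA0] in the macro.

def inc_register(low, high):
    low = (low + 1) % 256
    if low == 0:                 # [AA0]: the value is no longer positive
        high = (high + 1) % 256
    return low, high

low, high = 255, 0
low, high = inc_register(low, high)
assert (low, high) == (0, 1)     # carry propagated into the high accumulator
low, high = inc_register(low, high)
assert (low, high) == (1, 1)
```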

Clearing the register is just a matter of clearing all of its component accumulators, by repeatedly incrementing A0 and calling CAA0 until A0 has reached the value of the highest accumulator in the register. Thus, we have a correlate to the CLR (r) instruction.

Finally, rather than numbering our "instructions", as the successor machine does, we will name each instruction by placing it in its own macro event, and having it call the next "instruction's" event at the end.

In this way we can implement JE (rj, rk, z) as follows, with A0 beginning at the lowest order accumulator of rj, A1 at the lowest order accumulator of rk, and A2 initialized to zero. Here, n represents the size of the register and Z is the name of the event containing the instruction to jump to, while NEXT is the next instruction's event.

define JE { A2; NEXT [~(AA0-AA1)&A2-n-1]; Z [n-A2]; JE [AA0-AA1&A2-n-1]; }

This event will raise itself for n cycles, each time comparing the next highest-order accumulator of the two registers. If at any time they are not equal, it raises NEXT instead of itself. If it has raised itself n times, it raises Z. This is exactly the behavior we expect of JE (rj, rk, z).
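The control flow of that loop, abstracted away from the event machinery, is just a pairwise comparison that bails out early. A Python sketch (hypothetical, one comparison standing in for one handler cycle):

```python
# The JE comparison loop, one accumulator pair per handler cycle:
# bail to NEXT on the first mismatch, reach Z after n equal pairs.

def je(rj, rk):
    for a, b in zip(rj, rk):   # lowest-order accumulators first
        if a != b:
            return "NEXT"      # registers differ: fall through in sequence
    return "Z"                 # all n pairs equal: take the jump

assert je([1, 2, 3], [1, 2, 3]) == "Z"
assert je([1, 2, 3], [1, 9, 3]) == "NEXT"
```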

Since a bounded successor machine is implementable in 1mpr0mp2, it is as powerful as a linear-bounded automaton.