From Esolang
Paradigm(s): imperative
Designed by: Tomhe
Appeared in: 2021
Memory system: cell-based
Dimensions: one-dimensional
Computational class: Bounded-storage machine
Major implementations: github
Influenced by: BitBitJump, TOGA computer, Subleq
File extension(s): .fj

FlipJump is a 1-instruction language, intended to be the simplest / most primitive programming language.
Yet, it can do any modern computation (see #The FlipJump Power, #The Standard Library).

As the name implies - it Flips a bit, then Jumps (unconditionally). The fj op has 2 operands:


a;b

a and b are both addresses of bits, and the op is equivalent to:

not *a; jump b

Understanding the instruction

Take a look at the following (64-bit) program:

1000;256  // addresses 000-127
32;446    // addresses 128-255
128;256   // addresses 256-383

The CPU starts executing at address 0, so it flips the 1000th bit and jumps to address 256.
The CPU now executes the op 128;256: it flips the 128th bit (so the 2nd op is overridden to 33;446) and jumps to address 256 again.
The CPU is now stuck in a self-loop, as it jumps back to 256 forever.

The FlipJump CPU

The fj CPU has a built-in width, and starts executing from address 0.
It halts on a simple self-loop (jumps to itself, while not flipping itself).

There are variants of the CPU; let's assume the simplest form:

  • The jump address is always w-aligned.
  • The operation doesn't flip bits in itself.

Here is an 8-bit fj-CPU C emulator:

#define SUCCESS_FINISH 0
#define BAD_ALIGNMENT  1
#define SELF_FLIP      2

typedef unsigned char u8;

int fj8(u8* mem) {
    u8 ip = 0;

    while (1) {
        if (ip % 8)
            return BAD_ALIGNMENT;

        u8 f = mem[ip/8];       // the flip address
        u8 j = mem[ip/8 + 1];   // the jump address

        if (f >= ip && f < ip+16)
            return SELF_FLIP;
        if (0) {  // is f an IO address?
            // handle IO (will be explained next).
        }
        if (ip == j)
            return SUCCESS_FINISH;

        mem[f/8] ^= 1 << (f%8);   // Flip
        ip = j;                   // Jump
    }
}

The Assembly Language

Declaring constants & labels

x = 6    // declare a constant
label:   // declare a label (evaluated to its address)

// The fj program starts with 1 predefined constant - w, which holds the address-width (word-width). 2^w is the memory size in bits.


Syntax sugar

 ;J  =>  0;J                                                  - Just jump (it does flip address 0).
F;   =>  F;$     ($ is the address of the next instruction)   - Just flip.
 ;   =>  0;$                                                  - Flip address 0, and continue to the next instruction.

Flipping a whole word

wflip dst, val            - Assembles to multiple dst+i; ops (one for each 0<=i<w for which the i'th bit of val is 1).
// wflip 128, 69 will be assembled to 128+0; 128+2; 128+6; (actually, to something equivalent - it's the assembler's choice).
wflip dst, val, jmp_addr  - Same as above, but jumps to jmp_addr at the end.

// The wflip op is promised to take 1 op-size in its local area (and if more ops are needed - they will be placed at padded spots and at the end of the current segment).
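The set-bit decomposition that wflip performs can be sketched in Python (wflip_ops is a hypothetical helper for illustration, not part of the assembler):

```python
def wflip_ops(dst, val, w=64):
    """Return the flip addresses that `wflip dst, val` expands to:
    one `dst+i;` op for every set bit i of val."""
    return [dst + i for i in range(w) if (val >> i) & 1]

# wflip 128, 69  ->  128+0; 128+2; 128+6;   (69 == 0b1000101)
print(wflip_ops(128, 69))  # [128, 130, 134]
```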


pad n          // a special assembly-op that fills the current address with arbitrary fj ops, until the address is divisible by n*dw
label:         // here, label % (n*dw) == 0


def macro_name param1, param2.. @ temp_label1, temp_label2.. < globals.. > externs.. {
    // macro body
}
// For example:

def not4 x {
    x+0;
    x+1;
    x+2;
    x+3;
}

def notnotjump x y jumper {
    x;
    y;jumper
}

def skip_bits jump_bits @ end_label {
    ;end_label + jump_bits
  end_label:
}

// if BLA1, BLA2 were not labeled as externs - a warning would come up.
def declarations > BLA1, BLA2 {
  BLA1:
  BLA2:
}

// if C was not labeled as a global - a warning would come up.
def flipC < C {
    C;
}

// Using macros
not4 100
notnotjump a, w, b
skip_bits 2*w


rep(n, i) macro_name arg1, arg2, ..    // repeats the macro n times, with i = 0, 1, .., n-1.
                                       // n is an expression, and may contain constants and labels of previous addresses.

// For example, building not4 in a simpler way:

def not x {
    x;
}
def not4 x {
    rep(4, i) not x+i
}
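What rep does at assemble time can be modeled in Python (rep_expand is a hypothetical illustration of the expansion, using a naive text substitution - it is not the assembler's API):

```python
def rep_expand(n, macro_line):
    """Expand `rep(n, i) <macro_line>` into n macro invocations,
    with i = 0..n-1 (naive substitution, for illustration only)."""
    return [macro_line.replace("i", str(i)) for i in range(n)]

# `rep(4, i) not x+i` repeats the not macro 4 times:
print(rep_expand(4, "not x+i"))  # ['not x+0', 'not x+1', 'not x+2', 'not x+3']
```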


ns namespace_name {
    // Any variables, labels, and macro definitions declared here - will have the namespace prefix.
    // Access inner definitions with the "." prefix.
    ns nested_namespace {
        // Nested namespaces (to any level) are allowed!
        // Access nested_namespace definitions with the "." prefix, namespace_name definitions with the ".." prefix,
        //  and so on (the number of leading dots is the number of namespaces to go upwards, including the current one).
    }
    // A namespace can also contain code.
}

ns namespace_name {
    // You can append things to an already-defined namespace!
}

// Access namespace_name definitions from the outside with the "namespace_name." prefix.

// For example:

ns my_ns {
    X = 7
}
ns my_ns {
    def foo dst {
        dst;
    }
    ns inner_ns {
        Y = 8
        Z = .Y + ..X
        def inner_foo dst {
            ..foo dst+.Z
        }
    }
}

my_ns.foo v
my_ns.inner_ns.inner_foo v
v + my_ns.inner_ns.Z + 2*my_ns.X;
loop: ;loop
v: var w, 446

Segments & Reserve

// some code
segment 0x10000    // The code below will start from address 0x10000
// some code

reserve 0x400      // Will reserve a spot for 0x400 0-bits. They will be filled by the running environment.
// The .fjm file (the assembled file) allows specifying segment-length > data-length, and fills the remaining memory with zeros.

The end of the previous segment will be followed by the fj ops needed to complete that segment's wflip ops.

Assembly-time expressions

Many mathematical and logical operations are allowed between numbers/constants/labels at assemble time:

temp + 2*(b-temp) - 13/4 ; temp & 0x67 + 0b00110
    // You can use labels (resolved to numbers at assemble time) and many operations to build the flip/jump addresses:
    //  Mathematical:  + - * / %  ( )
    //  Logical:  & | ^
    //  Shifts:  << >>
    //  C-like ternary operator:  ?:
    //  Bit-width operator:  #  (the minimal number of bits needed to store this number;  #x == floor(log2(x))+1).
    // You can also use hexadecimal (0x) and binary (0b) numbers, and get the ascii value of chars ('A' == 0x41).
    // The value of a string is the number built of its bytes, little-endian ("TomH" == 'T' + 256*'o' + 256^2*'m' + 256^3*'H').
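The less common operators can be checked with plain Python (bit_width and str_value are hypothetical helpers that mirror the semantics described above):

```python
def bit_width(x):
    # the '#' operator: minimal number of bits needed to store x
    return x.bit_length()

def str_value(s):
    # the value of a string: the number built of its bytes, little-endian
    return int.from_bytes(s.encode("ascii"), "little")

assert bit_width(64) == 7   # #w == log2(w)+1 for w=64
assert ord('A') == 0x41
assert str_value("TomH") == ord('T') + 256*ord('o') + 256**2*ord('m') + 256**3*ord('H')
```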

temp:  ;

b+'A';b+0x41    // The flip and jump addresses are identical in this line.

c+('A' > 'B' ? 1 : 'B');temp+(1<<(c-b))   // Is equivalent to c+0x42;temp+1

Memory - how can we implement variables?

A bit can be built using 1 fj operation. Specifically, ;0 or ;dw (defined in the standard library as 2*w).

Here the magic happens. The FlipJump operation inherently can't read. It also can't write a specific value. All it knows is to flip a bit, and then jump. But where to?

The hidden power of FlipJump lies exactly in this delicate point: it can jump to an already-flipped address. In other words, it can execute already-modified code.
I based the main standard library functions, and the implementation of variables, on this exact point.

Follow the next example to understand this important concept:

// Let's assume a 64-bit CPU, and that the label branch_target evaluates to 0x400 (1<<10).
// Follow the {0}, {1}, ... {9} numbers to trace the execution flow.

    ;code_start // {0} code starts at address 0

code_start:
// {1} We can flip the address in the bit_a opcode by the branch_target address (0x400):
    bit_a+64+10;    // The +64 is to get to the 2nd word (the address word), and the +10 flips the bit corresponding to 0x400.
// {2} If we jump to execute the opcode in bit_a, it will flip address 0 and then jump to the address written in it. 
//      So it will jump to 0x400, which is branch_target.

// {5} We will flip the address in the bit_b opcode by 0x400:
    bit_b+64+10;
// {6} Now we jump to execute the opcode in bit_b. It will flip address 0 and then jump to the address written in it.
//      So it will jump to 0x480 (was 0x80 from the start), which is second_branch_target.

branch_target:          // This is address 0x400
    // {4} Now we get here, and then continue jumping.
second_branch_target:   // This is address 0x480
    // {8} Another jump.

end:  ;end  // {9} The code will get here and then finish (self-loop).

bit_a:  ;0      // {3} Jump to branch_target
bit_b:  ;0x80   // {7} Jump to second_branch_target

The same flip/jump combination on bit_a/bit_b did different things.
We successfully jumped to different addresses, depending on the value of the said bits.
In that way, we can read the value of such a bit-variable.
Yep - by jumping to different addresses (based on the bit-variable's value), we indeed read its value.

In the same way, we can also implement hexadecimal variables in a single op (implemented in hexlib.fj).
Instead of two options (;0 or ;dw) we have a few more (;0*dw, ;1*dw, ;2*dw, ..., ;15*dw for hex / ..., ;9*dw for dec).
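The read-by-jumping trick can be modeled in Python. This is a sketch, not real fj code: a variable is represented only by the jump word of its op, and branch_target is assumed to be sufficiently aligned (as the padded branch targets are):

```python
w = 64
dw = 2 * w  # dw == 128 == 0x80: one op-size, in bits

def landing_address(value, branch_target):
    """Where execution lands after wflip-ing branch_target into the jump
    word of a variable's op (;0, ;dw, ...), then jumping to that op."""
    jump_word = value * dw      # the variable: ;0 for 0, ;dw for 1, etc.
    jump_word ^= branch_target  # what the wflip does
    return jump_word

assert landing_address(0, 0x400) == 0x400  # branch_target
assert landing_address(1, 0x400) == 0x480  # second_branch_target
```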

This is very nice, but it only worked because we knew the address of branch_target in advance. We usually don't know it while writing the code, but it is resolved during assemble time.

That's why the assembly language provides the wflip operation.

wflip bit_a+w, branch_target    // This will work for every branch_target address, not just 0x400.

Of course, it is important to repeat the same wflip just after the jump to bit_a.
We assume the jump-part of these bit-instructions is 0 / dw, so after each wflip we must wflip again, to set it back to 0 / dw.

Input / Output


Output

Output is done by flipping a special address.

2*w;    => will output 0
2*w+1;  => will output 1

To output an ASCII character - output its 8 bits, in lsb-first order.

For example, the next code will output 'T' (0b01010100):

2*w;
2*w;
2*w+1;
2*w;
2*w+1;
2*w;
2*w+1;
2*w;
The standard library defines a simpler output macro:

dw = 2*w
def output_bit bit {
    dw + bit;
}
def output_char char {
    rep(8, i) output_bit (char>>i)&1
}

// Output becomes easy as:
output_char 'T'
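In Python terms, the flip addresses that output_char 'T' expands to can be sketched as follows (output_flips is a hypothetical helper, assuming w=64):

```python
def output_flips(char, w=64):
    """The 8 flip addresses output_char expands to:
    2*w for a 0 bit, 2*w+1 for a 1 bit, lsb-first."""
    return [2*w + ((ord(char) >> i) & 1) for i in range(8)]

# 'T' == 0b01010100, so the lsb-first bits are 0,0,1,0,1,0,1,0:
print(output_flips('T'))  # [128, 128, 129, 128, 129, 128, 129, 128]
```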


Input

The next input bit is always loaded at address 3w+#w (3w+log2(w)+1), and is reloaded each time it's read.

You can use this bit by jumping to a flip-jump opcode that contains it. The best way is to jump to ;dw.

In that way - this bit will reflect either 0x0 or 0x80 in the jump-part of that flip-jump op.
If we wflip dw+w, some_padded_address, dw - the dw-flip-jump-op will jump to some_padded_address / some_padded_address+0x80, based on the input bit, just like in the Memory section.

// For example:

    wflip dw+w, padded_address, dw  // we assume dw+w is 0.

pad 2    // special assembly op, used to assure that padded_address is divisible by 2*dw.
padded_address:
    wflip dw+w, padded_address  // we make sure dw+w stays 0.
    // do some 0's stuff (the input bit was 0)

    wflip dw+w, padded_address  // we make sure dw+w stays 0.
    // do some 1's stuff (the input bit was 1)
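The input dispatch can be modeled in Python (a sketch under the w=64 assumption; input_landing is a hypothetical helper, not part of the library):

```python
w = 64

def input_landing(padded_address, input_bit):
    """After `wflip dw+w, padded_address, dw`, the op at dw jumps to
    padded_address plus the input bit's weight in the jump word:
    bit #w of the jump word, i.e. 1<<7 == 0x80 for w == 64."""
    return padded_address + (input_bit << w.bit_length())

assert input_landing(0x1000, 0) == 0x1000  # the 0's stuff
assert input_landing(0x1000, 1) == 0x1080  # the 1's stuff
```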

The iolib.fj standard library defines macros to make IO as simple as possible.

The Standard Library

I implemented the standard library for the language; it can be found in the Github STL Page.
It features some constants (runlib):

* dw   = 2 * w    // double word size
* dbit = w + #w   // bit-distance from variable start to bit value: w + double-w-width (log2(2w))

But mostly, it features useful macros, starting with the basic (runlib):

* startup       - the first macro in your code - creates the IO label and handles the initial flow.
* startup_and_init_all - replaces "startup"; it also initializes everything the stl needs.
* fj f, j       - a basic macro for the FlipJump op - f;j
* wflip_macro   - a basic macro for the WFlip op - wflip dst, val (, jmp_addr)
* comp_if expr, l0, l1      - jumps to l1 if expr!=0, else jumps to l0
* comp_flip_if bit, expr    - flips bit if expr!=0
* skip          - skips the next instruction.
* loop          - self-loop (finish executing right here).
* output_bit        - for outputting a constant bit.
* output_char ascii - outputs an 8-bit constant, like output_char 'T'.
* output str        - outputs a string constant, like output "Hello, World!".

The standard library uses 3 main namespaces - bit, hex, and stl - and no macro is declared outside these namespaces.
Each namespace offers many macros related to its variable type.
For example, the stl offers the bit.mov macro for moving bit-variables, and hex.mov for moving hex-variables.
The stl uses the stl._, bit._, hex._ namespaces for its inner macros, which are used only by the stl itself.

Note that the hex macros are faster than the bit macros.

The list of macros below might be partial. Check the full list in The stl Github Page, and in the stl files themselves.

basic macros (defined under bit and hex namespaces):

* if x, l0, l1  - reads the variable x, and jumps to l0/l1 based on its value (0/not0).
* xor dst, src  - flips the dst variable by the src value.
* mov dst, src  - copies the value of src into dst.

Mathematical/Logical macros (under the bit namespace, in stl/bit/ - similar macros can be found under the hex namespace):

* add n, dst, src    - adds the n-bit number (n consecutive bit-variables) src to dst.
* sub / inc / dec / neg
* xor / and / or  / not
* shr / shl / ror / rol

Advanced mathematical macros:

// The bit namespace (bit/mul.fj, bit/div.fj):
* mul10 n, x         - multiply the n-bit x by 10.
* div10 n, x         - divide the n-bit x by 10.
* mul n, dst, src    - multiply the n-bit src by dst, and save the result in dst.
* div n, a, b, q, r  - divide the n-bit a by b, save the result in q, and the remainder in r.
* idiv n, a, b, q, r - same as div but for signed numbers.
* div_loop / idiv_loop / imul_loop  - the loops implementations are much smaller in size, but a bit slower.
// The hex namespace ([x] is the hex-width (1/4 of the bit-width) of the number) (hex/mul.fj, hex/div.fj):
* add_mul n, res, a, b   - res[n] += a[n] * b[1]
* mul n, res, a, b       - res[n]  = a[n] * b[n]
* div n, nb, q, r, a, b, div0   - divides the unsigned a[:n] by b[:nb], saves the quotient in q[:n] and the remainder in r[:nb] (jumps to div0 if b[:nb]==0).
* idiv n, nb, q, r, a, b, div0  - same as div, but for signed numbers.

Memory/Comparison (under the bit namespace):

* vec n, value             - initializes an n-bit number with the value value.
* zero n, x                - zeros n bit-variables, starting from x.
* mov n, dst, src          - copies the n-bit src into dst.
* swap n, a, b             - swaps the n-bit a with b.
* xor_zero n, dst, src     - xors the n-bit dst by src, then zeros the n-bit src.
* cmp n, a, b, lt, eq, gt  - compares the n-bit a/b, and jumps to the right address (lt for a<b, eq for a==b, gt for a>b).
* if n, x, l0, l1          - jumps to l0 if the n-bit x is all zeros, and to l1 otherwise.

IO for variables (input/output, under the bit/hex namespace):

* input dst           - read a char (8-bits) from input, and save it in dst.
* output x            - output the bits in that variable (1 for bit, 4 for hex/dec).
* print x             - output a char (8-bits) from the 8-bit variable x.
* print_as_digit x    - output an ascii representation of this variable.
* print_str n, x      - output n characters starting from x; stops if it reaches a null byte ('\0').

Hexadecimal math/logic macros (under the hex namespace - these macros are usually faster than the bit ones):

* hex.init (required exactly once to use many of the macros below)
* vec, xor, zero, xor_zero, mov, swap, xor_by, set
* not, or, and, shl_bit, shr_bit, shl_hex, shr_hex, count_bits
* inc, dec, neg, add, sub
* if_flags, if, cmp, sign
* print_int, print_uint

Casting and formatted-printing (under casting, output, ...):

* bit2hex, hex2bit
* bin2ascii / dec2ascii / hex2ascii
* ascii2bin / ascii2dec / ascii2hex
* print_hex_int / print_hex_uint
* print_dec_int / print_dec_uint

Pointers, Stack, and Functions - The hex/pointers/ folder - handles w/4-hex variables as pointers to the memory:

* ptr_init        - declare exactly once to use the pointers macros (it's declared in the startup_and_init_all macro).
* call address    - calls a function (pushes the return address to the stack, takes only 1 bit-variable space).
* return          - returns from a function (pops the return address from the stack, and jumps to it).
* fcall / fret    - fast call / fast return (with a constant place to store the return address, other than a stack).
* ptr_inc / ptr_dec / ptr_add / ptr_sub - increase/decrease the pointer.
* stack n         - initializes a stack with n bit-variables.
* sp_inc / sp_dec / sp_add / sp_sub     - increase/decrease the stack pointer.
* push_ret_address / pop_ret_address    - push/pop the return address to/from the stack.
* {push/pop}_{hex/byte} - push or pop a single hex/byte to/from the stack, as a single stack-cell.
* push [n] / pop [n]  - push and pop variables from the stack

More pointer macros (under the bit/hex namespaces) - handles w-bit variables / w/4-hex variables as pointers to the memory:

* ptr_jmp  ptr    - ;*ptr
* ptr_flip ptr    - *ptr;
* xor_to_ptr / xor_from_ptr   - like ``xor *ptr, var`` / ``xor var, *ptr``. also with {n}.
* ptr_flip_by ptr, value  - wflip *ptr, value

Read/Write whole hex/byte from a w/4-hex pointer (read_pointers/write_pointers):

* read_hex {n}
* read_byte {n}
* write_hex {n}
* write_byte {n}

Lookup Tables

The hex/dec macros are based on "lookup-tables" (take a look at hex.exact_xor in stl/hexlib.fj).
Some of these tables are small enough to be inside a macro definition, but others are just too big (hex.or.init in stl/hexlib.fj for example).
The latter are initialized by using the hex.init / dec.init macros.
The idea is simple. Based on the parameters - jump to the right entry in the table, which will set the result variable to hold the right value.

The use-flow of the big-tables is as follows:

1. Set the jump part of a fj op (dst) to the start of the padded table, flipped by the parameters' values, to reflect the right table-entry.
   For example - flip dst+dbit+{0-3} by the first hex-param, and flip dst+dbit+{4-7} by the second hex-param.
2. Save the return address in another op (ret), and jump to dst.
3. The table-entry will cause bits to flip, such that a result variable (res) will have the right result.
4. Set the flipped dst-bits back to 0, and jump to ret.
(small tables are the same, except that the return address is known in advance - so there's no need for ret).

The big-tables are initialized once, and can be used anytime and anywhere - it's a way to save up some memory.
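The entry-selection of step 1 can be sketched in Python (a hypothetical model, assuming w=64 and a table of 256 dw-sized entries selected by two hex params; table_entry_address is not a real stl name):

```python
w = 64
dw = 2 * w  # one op-size, in bits

def table_entry_address(table_base, a, b):
    """Step 1: flipping dst+dbit+{0-3} by hex param a, and dst+dbit+{4-7}
    by hex param b, makes dst jump to the (a | b<<4)'th entry of the table."""
    assert table_base % (256 * dw) == 0, "the table must be padded"
    return table_base + (a | (b << 4)) * dw

entry = table_entry_address(256 * dw, 0x3, 0x5)  # entry index 0x53
assert entry == 256 * dw + 0x53 * dw
```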

The .fjm Format

The FlipJump executable file is saved as a .fjm file. It holds all the memory segments and data. In some versions, the data is compressed.

The format itself can be found under fjm_consts.py.

The .fjm file currently has 4 versions:

0. The basic version
1. The normal version (more configurable than the basic version)
2. The relative-jumps version (good for further compression)
3. The compressed version

Hello World

Using the stl (source):

output "Hello, World!\n(:"
[Screenshot: Hello World in FlipJump]

Using nothing (source):

def startup @ code_start > IO {
    ;code_start
  IO:
    ;0
  code_start:
}

def output_bit bit < IO {
    IO + bit;
}
def output_char ascii {
    rep(8, i) output_bit ((ascii>>i)&1)
}

def end_loop @ loop_label {
  loop_label:
    ;loop_label
}

    startup

    output_char 'H'
    output_char 'e'
    output_char 'l'
    output_char 'l'
    output_char 'o'
    output_char ','
    output_char ' '
    output_char 'W'
    output_char 'o'
    output_char 'r'
    output_char 'l'
    output_char 'd'
    output_char '!'

    end_loop

How To Run?

For a more extensive guide, see the Github README.md.

Download flipjump:

pip install flipjump

Assemble and Run your programs with the `fj` command line tool:

>>> fj hello.fj
Hello, World!

You can also assemble and run separately:

fj --asm
fj --run
[Screenshots: assembling calc.fj, and running the compiled calculator]

You can also debug (single step, read memory, read flipjump stl variables, stop after 10/100 ops, continue):

[Screenshot: debugging in flipjump, placing breakpoints]

And test:

[Screenshot: running the tests with pytest --regular]

Current list of FlipJump assemblers:

Current list of FlipJump interpreters:

The FlipJump Power

FlipJump is the product of searching for the simplest / most primitive / weakest instruction set, and seeing what can be done with that power.

It seems like a lot can be done!

A screen recording taken from the calc.fj program, making multiple mathematical calculations and printing integers:

[Screen recording: calculations using only FlipJump]

A screen recording taken from the series_sum.fj program, calculating an arithmetic series:

[Screen recording: calculating an arithmetic series]

A screenshot from the func.fj source, making a function call:

[Screenshot: a function call using the FlipJump Standard Library]

A quine (a program that prints itself) was written using only 99 ops of code + 448 ops of data, fully documented.

A compiler from RiscV 32-bit machine code to 64-bit fj code is planned (current source).

Comparison to similar languages


BitBitJump

BitBitJump is the closest language to FlipJump, with its unconditional jump.

Yet, FlipJump is more basic/primitive than BitBitJump:

  • BitBitJump can copy a bit from one general address to another.
  • BitBitJump can write zeros and ones directly to an address, without knowing the old value.

Neither can be trivially done with FlipJump, and no implementation has ever succeeded in doing so (try thinking about how you would implement it..).
Flipping a general memory bit in BitBitJump is possible, and jumping unconditionally is possible as well (with 0 0 jump_address).

TOGA computer

TOGA is close to FlipJump as well. It flips a bit, but then jumps conditionally.

FlipJump is clearly more basic/primitive than TOGA, as the conditional jump has the power of reading any bit in memory (and therefore enables writing, too). Also, an unconditional jump can easily be implemented in TOGA using 2 instructions.

External resources

- Github - Macro Assembler and Standard Library, and 2 interpreters - The flipjump Python Library

See Also