User talk:Rdebath


SBFI

Hi,

I was going to ask you if you could test my interpreter, but you were faster, so thanks a lot! Your work is really impressive. I'm a bit disappointed by my score compared to bff4 and bffsree, but I'll try to improve it!

Maxdefolsch (talk) 07:17, 2 July 2014 (UTC)

I suspect the 'issue' is your lack of 'simple loops', as you've already noticed. The 'Prime' test, and to a lesser extent the 'long' test, are improved by full simple loops, and the Prime test has an 'if' style loop that seems to be the last of the common local code optimisations.
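(For readers following along, a minimal sketch of the kind of 'simple loop' handling discussed here: a loop whose body uses only + - > <, returns the pointer to its start cell, and decrements that cell by exactly one can be replaced by multiply-adds plus a clear. The function and the offset limits below are illustrative assumptions, not SBFI's or tritium's actual internals.)

    /* Simple-loop rewrite: [->+++<] behaves as p[1] += 3*p[0]; p[0] = 0;
     * so such a loop can run in O(1) instead of iterating.
     * Sketch only: offsets limited to +/-31 cells, 8-bit wrapping tape. */
    #include <stdio.h>
    #include <string.h>

    /* Run the text between '[' and ']' as a simple loop at tape cell p.
     * Returns 1 on success, 0 if the body isn't "simple". */
    static int run_simple_loop(const char *body, int len, unsigned char *p)
    {
        int off = 0, delta0 = 0, i;
        int adds[64];                   /* net change per relative offset */
        memset(adds, 0, sizeof adds);

        for (i = 0; i < len; i++) {
            switch (body[i]) {
            case '>': if (++off > 31)  return 0; break;
            case '<': if (--off < -31) return 0; break;
            case '+': if (off) adds[32 + off]++; else delta0++; break;
            case '-': if (off) adds[32 + off]--; else delta0--; break;
            default:  return 0;         /* I/O or nested loop: not simple */
            }
        }
        if (off != 0 || delta0 != -1)   /* must return home and count down */
            return 0;

        for (i = -31; i <= 31; i++)     /* p[i] += adds[i] * p[0] (mod 256) */
            if (i && adds[32 + i])
                p[i] = (unsigned char)(p[i] + adds[32 + i] * *p);
        *p = 0;                         /* the counter always ends at zero */
        return 1;
    }

    int main(void)
    {
        unsigned char tape[8] = { 5, 0 };
        if (run_simple_loop("->+++<", 6, tape))
            printf("%d %d\n", tape[0], tape[1]);   /* prints: 0 15 */
        return 0;
    }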
I am curious as to where your slowdown is, though. You currently have a very good speed on the counter test; that test is basically unoptimisable (by design) and so shows the basic instruction rate of the technology. Yours seems to be running faster than I would expect from the C + compiled-C hybrid implicit in your conversion of the '[>>]' command. I don't think it's your use of the computed goto, because the i7-950 I use probably penalises you for that; it seems to have enough functional units to run the extra condition of a 'while-switch' in parallel.
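(An aside: the dispatch difference referred to here looks roughly like the sketch below, on a toy two-op instruction set. The while-switch form re-tests a loop condition on every instruction; the computed-goto form, a GNU C extension — hence the -pedantic/-Wno-gnu notes just below — jumps straight to the next handler. Illustrative only, not either interpreter's real code.)

    /* Two dispatch styles for a bytecode interpreter, on a toy
     * instruction set.  run_switch pays an extra branch per instruction
     * for the loop condition; run_goto (GNU C labels-as-values) does
     * not.  Illustrative only. */
    #include <stdio.h>

    enum { OP_INC, OP_HALT };

    static int run_switch(const unsigned char *code)
    {
        int acc = 0, pc = 0, running = 1;
        while (running) {               /* re-tested every instruction */
            switch (code[pc++]) {
            case OP_INC:  acc++;       break;
            case OP_HALT: running = 0; break;
            }
        }
        return acc;
    }

    #ifdef __GNUC__
    static int run_goto(const unsigned char *code)
    {
        static void *dispatch[] = { &&do_inc, &&do_halt };
        int acc = 0, pc = 0;
        goto *dispatch[code[pc++]];     /* no loop condition at all */
    do_inc:
        acc++;
        goto *dispatch[code[pc++]];
    do_halt:
        return acc;
    }
    #endif

    int main(void)
    {
        const unsigned char prog[] = { OP_INC, OP_INC, OP_HALT };
        printf("%d", run_switch(prog));
    #ifdef __GNUC__
        printf(" %d", run_goto(prog));
    #endif
        printf("\n");                   /* "2 2" under gcc or clang */
        return 0;
    }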
BTW: your code compiles cleanly, except for the gotos (of course), with gcc when you add '-std=c99 -pedantic', and completely cleanly with clang when you add '-std=c99 -pedantic -Wno-gnu'.
Rdebath (talk) 19:06, 2 July 2014 (UTC)
Thanks for your analysis. For the slowdown, it's probably in part because my implementation of this optimization was *really* ugly, since I was just trying to see if I could make it work without breaking the rest. I should try to make it cleaner.
Say, I was looking again at my results, in particular this red cell (> 200s) for PIdigits that must bring down my score a lot. Is it really this test: https://github.com/rdebath/Brainfuck/blob/master/testing/PIdigits.b with the input "200"? I tested it myself and it took about 26 seconds, so that can't be right. I'd like to know what your test was, so I can see if I'm able to improve my performance. Thanks in advance! --(this comment by Maxdefolsch at 07:11, 4 July 2014 UTC; please sign your comments with ~~~~)
The problem with the PIdigits test is that it has to be run using a 16-bit interpreter (or one with larger cells). Because of this, for eight-bit interpreters, I run the code through my "bf2bf -dbl12" (or a different variant if it's slightly faster) before giving it to them. Some of the interpreters are fast enough for this to be a reasonable test run. I haven't mentioned this in the table description yet, but I have just added a 32-bit version of your interpreter.
NOTE: only the first three tests are used to make the 'score' column; anything else I've tried is just too unstable. I have tried to make this choice representative of the real speed of the interpreter, but like all benchmarks it needs plenty of salt. --(this comment by Rdebath at 09:33, 4 July 2014 UTC; please sign your comments with ~~~~)
Oh, I see. Thanks for adding the 32-bit version, and for the clarification about the score. It's very motivating :P
I wonder how LostKng is implemented, since it's so slow with 32-bit cells. Does it use something like [+] / -[-]? Weird. --(this comment by Maxdefolsch at 11:02, 4 July 2014 UTC; please sign your comments with ~~~~)
Yes, it's written in BFBASIC, which seems to mean it sometimes copies -1 values around. --(this comment by Rdebath at 13:43, 4 July 2014 UTC; please sign your comments with ~~~~)
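(A hedged illustration of why that hurts: a naive move loop such as [->+<] iterates once per unit of the counter cell, and -1 stored in a cell is that cell's maximum value, so the same copy that wraps after 255 steps with 8-bit cells takes about four billion with 32-bit ones.)

    /* Iteration counts for moving a cell holding -1 via [->+<]. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t  c8  = (uint8_t)-1;     /* -1 as an 8-bit cell  */
        uint32_t c32 = (uint32_t)-1;    /* -1 as a 32-bit cell  */
        printf("8-bit  cells: %u steps\n", (unsigned)c8);   /* 255 */
        printf("32-bit cells: %u steps\n", (unsigned)c32);  /* 4294967295 */
        return 0;
    }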
By the way, if you need very heavy tests to compare your faster compilers / interpreters, you could use mandelbrot-huge.b and mandelbrot-titannic.b. --(this comment by Maxdefolsch at 07:16, 4 July 2014‎ UTC; please sign your comments with ~~~~)
mandelbrot titanic isn't bad, but I'm more looking for programs that can be optimised. For example, my bfi runs titanic in about 30 seconds, while prime.b can take it over 3000 seconds. But prime.b should be optimisable to subsecond levels (this is the C speed for the same algorithm); titanic would have to be rewritten to make that sort of performance reasonable. The problem seems to be the 8-bit cell size again; to get around this limitation you have to do things like store all your numbers as decimal digits. This increases the complexity of the BF program considerably, but, worse, it means that to optimise it well the BF interpreter has to recognise the scheme used for extending the cell size. An example: the 1030/prime.b entry on my table is only runnable by strongly optimising 16-bit or larger interpreters, EXCEPT for the "hydrogen" interpreter. That interpreter understands "bf2bf -dblcopy", one particular kind of cell doubling; because of this it can optimise cell-doubled code using this method and so gives a quick time for that test. But change the cell-doubling scheme and it's lost; there are lots of different schemes. Rdebath (talk) 09:33, 4 July 2014 (UTC)
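(To make the cell-doubling point concrete: below is one possible doubled-cell layout, each logical 16-bit cell stored as a low/high pair of 8-bit tape cells. The conditional carry is what an optimiser has to recognise as a unit before it can treat the pair as one wide cell. This is a sketch of the idea, not bf2bf's actual encoding.)

    /* One possible doubled-cell scheme: a logical 16-bit cell held in
     * two 8-bit tape cells.  A logical '+' becomes an increment plus a
     * conditional carry, which in generated BF is a small loop an
     * optimiser must recognise whole.  Sketch only. */
    #include <stdio.h>

    typedef struct { unsigned char lo, hi; } cell16;

    static void inc16(cell16 *c)        /* logical '+' */
    {
        c->lo++;
        if (c->lo == 0)                 /* low byte wrapped: carry */
            c->hi++;
    }

    static unsigned value16(const cell16 *c)
    {
        return c->lo + 256u * c->hi;
    }

    int main(void)
    {
        cell16 c = { 255, 0 };
        inc16(&c);
        printf("%u\n", value16(&c));    /* prints: 256 */
        return 0;
    }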
Oh, so you chose to optimize a very specific pattern that you use yourself, if I understand correctly. OK. --(this comment by Maxdefolsch at 11:02, 4 July 2014‎ UTC; please sign your comments with ~~~~)
Yup, the Brainfuck bitwidth conversions and a slightly optimised variant, which is part of why it's down in my 'also ran' section and not added into my tritium interpreter. Rdebath (talk) 13:43, 4 July 2014 (UTC)

BF Constants Cleanup

Kudos to Rdebath for finishing the cleanup job on Brainfuck constants I started years ago and never got around to finishing. I know how hard it is picking out which ones to keep and which ones are redundant for every single number. --Quintopia (talk) 04:10, 5 August 2015 (UTC)

Thanks, but it wasn't so difficult once I decided to automate it. The rules I'm now using are (sketched in code after the list):
  • Sort into order: Length desc, Cells desc, Steps desc, ASCII asc, Wrap.
  • Anything with both the same-or-greater length AND the same-or-greater number of cells as the best two-cell non-wrapping solution (or the one-cell non-wrapping one) is removed.
  • For every group of Wrap+Length+Cells, if there is another group (with the same Wrap) where both length and cells are better, delete this group.
  • For every remaining group of Wrap+Length+Cells, use the last line (the smallest numbers).
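In code, the rules look roughly like this (the record type and rows are illustrative assumptions, not the actual automation script):

    /* Sketch of the pruning rules above.  "entry" and its fields are
     * assumptions for illustration; the real table differs. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        int wrap, length, cells, steps;   /* wrap: 1 = needs wrapping */
        const char *code;
    } entry;

    /* Rule 1: sort Length desc, Cells desc, Steps desc, ASCII asc, Wrap. */
    static int cmp(const void *a, const void *b)
    {
        const entry *x = a, *y = b;
        if (x->length != y->length) return y->length - x->length;
        if (x->cells  != y->cells)  return y->cells  - x->cells;
        if (x->steps  != y->steps)  return y->steps  - x->steps;
        int c = strcmp(x->code, y->code);
        return c ? c : x->wrap - y->wrap;
    }

    /* Rule 3: a group loses if a same-wrap group beats it on BOTH
     * length and cells. */
    static int dominated(const entry *e, const entry *t, int n)
    {
        for (int i = 0; i < n; i++)
            if (t[i].wrap == e->wrap &&
                t[i].length < e->length && t[i].cells < e->cells)
                return 1;
        return 0;
    }

    /* Rules 2 and 4: drop anything no better than the two-cell
     * non-wrapping baseline, then keep only the last line of each
     * (wrap,length,cells) group. */
    static int prune(entry *t, int n, const entry *base)
    {
        qsort(t, n, sizeof *t, cmp);
        int out = 0;
        for (int i = 0; i < n; i++) {
            if (t[i].length >= base->length && t[i].cells >= base->cells
                && cmp(&t[i], base) != 0)          /* rule 2 */
                continue;
            if (dominated(&t[i], t, n))            /* rule 3 */
                continue;
            if (i + 1 < n && t[i].wrap == t[i+1].wrap
                && t[i].length == t[i+1].length
                && t[i].cells == t[i+1].cells)     /* rule 4 */
                continue;
            t[out++] = t[i];
        }
        return out;                                /* survivors kept in t */
    }

    int main(void)
    {
        entry t[] = {                     /* hypothetical rows */
            { 0, 10, 2, 40, "shortNW2"  },
            { 0, 12, 2, 30, "longerNW2" },
            { 1, 12, 1, 99, "wrap1cell" },
        };
        entry base = t[0];                /* best two-cell nw entry */
        int n = prune(t, 3, &base);
        for (int i = 0; i < n; i++)
            printf("%d %d %s\n", t[i].length, t[i].cells, t[i].code);
        return 0;
    }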
Okay, maybe the rules aren't so easy ;-) I've just tweaked the last two because I found another way of adding far too many lines.
... BTW: as I have them, do you think it would be better if I added the steps to the display too? Rdebath (talk) 06:31, 5 August 2015 (UTC)
I don't see any reason to display the number of steps, since few people these days are concerned with the running time of BF programs. However, I'm not sure why you are removing things worse than 2 cells non-wrapping? For golfing, it would also be useful to have the shortest program (even if the shortest program requires more than two cells). In particular, I am not sure why 255 doesn't include the shortest nwsoft solution given by taking the shortest program for 256 (the one that calculates it as 2^8) and subtracting one from it. Given that there are plenty of 16-bit implementations out there, such solutions remain of interest. (If your latest update has added these back in, my apologies... I haven't looked at it.) --Quintopia (talk) 02:03, 7 August 2015 (UTC)
Yup, I saw the number of steps had been removed before.
There will always be an example of a strictly shorter sequence; the rule about being longer than the two-cell solution is the only filter that goes between the classes.
But a particular one will not be included if another sequence is better (or the same) in BOTH cells and length, so the shortest 256-1 is there; it's just that I have counted the number of cells differently.
I've now recounted the over-255 ones too. Hmm, most of them were miscounted, ho hum. Rdebath (talk) 08:06, 7 August 2015 (UTC)
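(On the cell-counting point: one plausible convention is to count the span of tape a snippet's pointer touches, as sketched below. Counting from the start cell rather than the leftmost cell reached gives different totals, which is the sort of discrepancy described above. The sketch assumes loop bodies are pointer-balanced; it is illustrative, not the counting the table actually uses.)

    /* Count cells as the pointer's total excursion over > and <.
     * Assumes any [..] bodies are pointer-balanced.  Sketch only. */
    #include <stdio.h>

    static int cells_used(const char *bf)
    {
        int pos = 0, min = 0, max = 0;
        for (; *bf; bf++) {
            if (*bf == '>' && ++pos > max) max = pos;
            if (*bf == '<' && --pos < min) min = pos;
        }
        return max - min + 1;           /* span of tape touched */
    }

    int main(void)
    {
        printf("%d\n", cells_used("++++[>++++<-]"));   /* prints: 2 */
        return 0;
    }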

Thanks for giving the Ruby interpreter!

Thanks for giving the Ruby interpreter on this page: Mariofuck! MihaiEso (talk) 11:30, 28 May 2024 (UTC)