00:00:54 so basically, the United Kingdom now has to sacrifice three or four prime ministers every year in order to be able to delay the Brexit forever and have someone to blame for it 00:01:22 if only they could start a Ministry of Brexit, so that they only had to sacrifice the Brexit minister, rather than the prime minister 00:16:01 but maybe the political dragon specifically demands prime ministers 00:27:28 -!- Phantom__Hoover has quit (Quit: Leaving). 00:35:49 Sgeo: Yes, Microsoft's reverse WINE runs in the kernel and does trickery. 00:36:04 But I imagine you could implement it with a debugger or something. 00:37:28 you mean WSL? 00:39:49 Sgeo: nah, I think if there was a need to emulate windows syscalls by catching the actual syscall, then the linux kernel would just grow an api for user processes to do exactly that 00:40:02 to catch the syscall that is, not to do the whole emulation 00:40:10 how does UML work by the way? 00:44:16 b_jonas: ptrace is already an API to catch syscalls 00:44:33 and UML is a different architecture from x86 or whatever 00:44:42 kmc: yeah, ordinary linux syscalls (all flavors of them), but I don't know if it would catch windows syscalls 00:44:51 so I think the "syscalls" are implemented as userspace calls into the user mode linux kernel 00:44:56 hmm 00:45:05 you can't run ordinary linux binaries in UML, I don't think 00:45:12 oh! 00:45:21 so that's why it didn't work when I just tried to copy an x86 binary? 00:45:22 `file /bin/ls 00:45:23 ​/bin/ls: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=3c233e12c466a83aa9b2094b07dbfaa5bd10eccd, stripped 00:45:33 `uname 00:45:33 Linux 00:45:34 but that would make UML all but useless 00:45:41 b_jonas: no because we have these things called compilers 00:45:43 because nobody would actually compile programs for it 00:45:54 I don't think it's a different architecture though 00:45:54 most of the software people want to run on linux is open source 00:45:58 ``` uname -a 00:45:59 Linux (none) 4.9.82 #6 Sat Apr 7 13:45:01 BST 2018 x86_64 GNU/Linux 00:46:13 well, I might be wrong, it's been forever since I played with uml 00:46:29 maybe it does use ptrace for syscall emulation 00:46:34 i know it uses it for some weird pagetable manipulation stuff 00:46:45 maybe there's some interface other than ptrace 00:46:57 I mean WSL 1, yes. 00:47:48 -!- FreeFull has joined. 00:52:14 shachaf: ham radio : communication :: esoprogramming : programming 00:52:15 ? 00:52:26 no, I don't think so 00:52:38 https://old.reddit.com/r/amateurradio/comments/8lpk45/moon/dzhpm4k/ "the military spent some time and money on this Back In The Old Days, but they stopped doing it because it's dumb as dog shit and horrifically inefficient, which means it is absolutely irresistible for amateur radio operators." 00:52:39 but maybe I'm taking metaphors too seriously 01:18:46 kmc: Update: Now "/lib64/ld-linux-x86-64.so.2 ./out.a" runs successfully but just running the program fails. 01:18:51 huh 01:19:48 hmm, "out.a" 01:20:19 I previously called it "out" but that was either too confusing or not confusing enough. 01:20:36 out.exe ;-) 01:20:59 why not call it a.out 01:21:35 It's not an a.out file. 01:21:43 I guess, if I called it a.out, it would be an a.out file. 01:36:33 I built a debug musl loader and it's more helpful. 01:39:47 I have plenty of ELF files called a.out. 01:40:13 shachaf: does it also crash? 01:43:51 It's already crashed in several different ways. 
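kmc's point above, that ptrace is already an API for catching syscalls, can be seen in a few lines of C. The sketch below is a toy tracer for Linux/x86-64 only (PTRACE_SYSCALL stops, with orig_rax holding the syscall number); it is not how WSL 1 or UML are actually implemented, and it does not distinguish signal-delivery stops from syscall stops.

    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s prog [args...]\n", argv[0]); return 1; }

        pid_t pid = fork();
        if (pid == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);    /* ask to be traced by the parent */
            execvp(argv[1], &argv[1]);
            perror("execvp");
            _exit(127);
        }

        int status;
        waitpid(pid, &status, 0);                     /* first stop: the exec itself */
        while (WIFSTOPPED(status)) {
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, pid, NULL, &regs);
            /* On x86-64, orig_rax holds the syscall number; each syscall produces
             * two stops (entry and exit), so every number is printed twice here. */
            fprintf(stderr, "syscall %llu\n", (unsigned long long)regs.orig_rax);

            ptrace(PTRACE_SYSCALL, pid, NULL, NULL);  /* run until the next syscall stop */
            waitpid(pid, &status, 0);
        }
        return 0;
    }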
01:44:17 does the kernel say anything about it? 01:44:35 It says things like "segfault at 8" 01:44:46 fun 01:45:56 So, maybe some symbol didn't get resolved (relocated) properly :) 01:46:08 * int-e is so smart. 01:47:45 I do wonder how hard it would be to transplant the kernel code into user spaces so it could be traced... 02:03:43 -!- doesthiswork has joined. 02:04:56 int-e: you can emulate a whole virtual machine and debug the kernel that way 02:20:26 Oh, my PT_PHDR header was wrong, that's why. 02:21:48 what's that one 02:25:57 It tells the dynamic linker where to find the segment headers. 02:27:00 oh 02:27:04 that sounds pretty important 02:36:17 -!- Sgeo has quit (Ping timeout: 258 seconds). 02:41:43 -!- b_jonas has quit (Quit: leaving). 02:41:50 -!- Sgeo has joined. 02:50:00 I guess? 02:50:25 It seems kind of silly because it's the first segment itself. 02:50:36 Well, some segment header, maybe not the first. 04:20:10 it tells tyhe kernel what to map into memory in the first place 04:20:11 -!- FreeFull has quit. 04:20:48 (which /may/ explain the difference between executing the thing and asking ld.so to load it for you...) 04:21:19 (all AFAIUI, which isn't very far.) 04:22:34 int-e: No, those are the LOAD segments. 04:23:09 Someone posted this method for 2-out-of-3 secret sharing with xor: https://github.com/wybiral/tshare/blob/master/tshare.go 04:23:22 I feel like there should be a simpler way than that. 04:25:28 -!- doesthiswork has quit (Ping timeout: 268 seconds). 04:27:03 Hmm, https://eprint.iacr.org/2008/409.pdf 04:30:13 Maybe not. 04:46:57 What's the simplest possible 2-of-3 sharing scheme? Say for sharing 1 bit. 04:48:45 The natural thing to my mind is interpolating a linear polynomial over GF(2^2). 04:51:26 But it ends up being more complicated than what you get if you mask part of the messages: http://paste.debian.net/1093525/ 04:52:45 Say the bit is b (0 or 1) and we flip a 3-sided coin to a random value r (0 or 1 or 2). We give person p the value (b + r + p) % 3 04:52:58 Wait, that doesn't even let you recover the message, what am I saying. 04:53:22 I was thinking of a different scheme and I obviously simplified it too much. 04:57:55 Ah, of course working modulo 3 works. Distribute r, m+r, 2m+r to the parties. 04:58:32 (m is the secret message to be shared; r is random modulo 3) 04:59:23 Oh, that's better than the scheme I wrote out. 04:59:40 (I mean, the working scheme I wrote in a text file here, not the one I wrote above which was nonsense.) 05:00:43 this is dual to the polynomial interpolation (the message is in the linear term now, not the constant term). 05:14:30 shachaf: http://paste.debian.net/1093526/ ... so this can be thought of as polynomial interpolation over GF(2^2) :-) 05:15:31 Neat. 05:23:14 Hah I'm missing a ' at the end. 05:31:42 i,i but what's x'? 05:35:16 x comes from the representation of GF(2^2). 05:35:56 (polynomials in x over GF(2) modulo x^2+x+1) 05:36:31 That's x, not x' 05:37:54 meh 05:38:16 I see what you did there. I don't approve. I should've written "near the end". 07:13:41 -!- cpressey has joined. 07:25:26 [[Talk:An Odd Rewriting System]] https://esolangs.org/w/index.php?diff=64792&oldid=64778 * Chris Pressey * (+361) I admit defeat 07:31:08 Design for a pathological language, take 3: Fix an enumeration Tn of TMs and an enumeration of sentences Sn in Presburger Arithmetic. Input is . Check if Sn is valid (V) or invalid (I). If it matches 2nd element of pair, simulate Tn, else nop. 
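int-e's modulo-3 scheme (hand the three parties r, m+r and 2m+r) can be written out directly. The sketch below shares a single secret digit m in {0,1,2}; the names share and reconstruct and the use of rand() are illustrative only, and a real implementation would apply this digit-wise to the message and draw r from a cryptographic RNG.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Party p receives s_p = (p*m + r) mod 3, with r uniform.  Any single
     * share is uniformly distributed (r masks it); any two recover m. */
    static void share(int m, int s[3])
    {
        int r = rand() % 3;                  /* toy randomness, not cryptographic */
        for (int p = 0; p < 3; p++)
            s[p] = (p * m + r) % 3;          /* s0 = r, s1 = m+r, s2 = 2m+r */
    }

    /* From parties i and j (i < j): s_j - s_i = (j-i)*m (mod 3), and every
     * nonzero element mod 3 is invertible, so m = (s_j - s_i) * (j-i)^-1. */
    static int reconstruct(int i, int si, int j, int sj)
    {
        int inv[3] = { 0, 1, 2 };            /* inverses mod 3: 1->1, 2->2 */
        int diff = ((sj - si) % 3 + 3) % 3;
        int step = ((j - i) % 3 + 3) % 3;
        return diff * inv[step] % 3;
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        for (int m = 0; m < 3; m++) {
            int s[3];
            share(m, s);
            printf("m=%d shares=(%d,%d,%d) from{0,1}=%d from{0,2}=%d from{1,2}=%d\n",
                   m, s[0], s[1], s[2],
                   reconstruct(0, s[0], 1, s[1]),
                   reconstruct(0, s[0], 2, s[2]),
                   reconstruct(1, s[1], 2, s[2]));
        }
        return 0;
    }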
07:32:16 There's still a problem: you want the two enumerations to be "different enough" from each other, but how do you guarantee that? 07:33:28 Maybe every 100th n there's an instance of PresA that's easy, and a TM that's useful. 07:35:20 But I guess the bigger question is: if I'm so bad at math, why do I even try to do it? 07:41:25 -!- Frater_EST has joined. 07:41:34 -!- Frater_EST has left. 07:49:02 I'm bad at software too, because to be good at software, you need to be charismatic and live in California. 08:01:38 -!- Lord_of_Life has quit (Ping timeout: 248 seconds). 08:02:55 -!- Lord_of_Life has joined. 08:04:51 -!- rodgort has quit (Quit: Leaving). 08:10:00 -!- rodgort has joined. 08:18:36 [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=64793&oldid=64783 * PCC * (+99) 08:20:36 -!- heroux has quit (Ping timeout: 272 seconds). 08:38:53 -!- b_jonas has joined. 08:39:17 shachaf: for secret sharing, see David Madore's program with which he has unknowingly won the IOCCC: ftp://ftp.madore.org/pub/madore/misc/shsecret.c 08:42:41 [[What Mains Numbers?]] N https://esolangs.org/w/index.php?oldid=64794 * PCC * (+682) what is What Mains Numbers and how to can you program with it? 08:43:06 -!- user24 has joined. 08:46:27 [[Language list]] https://esolangs.org/w/index.php?diff=64795&oldid=64785 * PCC * (+26) /* W */ 09:21:32 -!- arseniiv has joined. 09:29:29 -!- b_jonas has quit (Quit: leaving). 09:34:32 Apparently, version 1.0 of the Haskell Report was published on the first of April 1990 09:34:42 Maybe it's been an elaborate April Fools' joke that got out of hand 09:54:38 -!- shachaf has quit (Ping timeout: 245 seconds). 10:02:40 -!- shachaf has joined. 10:36:37 -!- wob_jonas has joined. 10:37:07 Taneb: it certainly got out of hand, but I think it wasn't a joke 10:40:03 It was an April Fool's Serious 10:40:19 Like GMail 10:40:43 hmm 10:41:03 -!- heroux has joined. 10:57:02 -!- sebbu has quit (Quit: reboot). 11:19:48 -!- user24 has quit (Quit: Leaving). 11:23:54 -!- FreeFull has joined. 11:24:02 -!- oklopol has joined. 11:24:54 -!- FreeFull has quit (Client Quit). 11:25:59 $ ldd out.a statically linked 11:26:08 $ file out.a 11:26:22 out.a: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/l, not stripped 11:27:34 so what does ldd do? collect the shared objects linked in and spout that message if it comes up with nothing? 11:27:39 shachaf: try objdump -x 11:28:08 I'm not sure what ldd does. 11:28:52 -!- FreeFull has joined. 11:29:13 whoa, I didn't know about pldd 11:29:17 "ldd invokes the standard dynamic linker with the LD_TRACE_LOADED_OBJECTS environment variable set to 1." 11:29:23 note that objdump is a cross-utility, it can read the executables of any platform on any platform 11:30:21 ...I also didn't know that ldd was a shell script. 11:30:43 Or that it used that mechanism. 11:30:53 Only platforms it knows about. 11:31:02 huh, didn't ldd use to use a more esoteric interface to communicate with the dynamic linker, where instead of an env-var, it invoked the program with argc being zero? 11:31:11 objdump won't tell me anything I don't already know, since I generated this ELF file myself byte by byte. 11:31:39 oh 11:31:41 I mean, it won't tell me anything about my program. 11:31:58 well, it could tell you something if you don't fully understand how the ELF format works 11:31:59 The idea was to learn what wasn't compliant about it. 11:32:43 Man, using ld.so totally messes up my nice strace output. 
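The ldd mechanism quoted above, setting LD_TRACE_LOADED_OBJECTS=1 and letting the dynamic linker do the work, is easy to reproduce by hand. The sketch below is glibc-specific and the name mini-ldd is hypothetical: it just sets the variable and execs the target binary, after which glibc's ld.so prints the dependencies and exits instead of running the program. The real ldd script invokes ld.so explicitly and sets a few more variables.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s /path/to/binary\n", argv[0]); return 1; }

        /* ld.so sees this variable, resolves and lists the needed shared
         * objects, then exits without transferring control to the program. */
        setenv("LD_TRACE_LOADED_OBJECTS", "1", 1);

        char *args[] = { argv[1], NULL };
        execv(argv[1], args);

        perror("execv");          /* only reached if the exec itself failed */
        return 127;
    }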
11:32:51 like if you made a mistake or something 11:33:02 `` strace -fo tmp/OUT /bin/true 11:33:03 No output. 11:33:40 `url tmp/OUT 11:33:41 https://hack.esolangs.org/tmp/OUT 11:33:59 What! That's a lot nicer than I get on my system. 11:34:25 $ strace /bin/true |& grep 'ld\.so\.nohwcap' | wc -l 11:34:25 5 11:38:32 are they both x86_64? 11:39:19 Mine is. 11:39:33 $ strace /bin/true |& wc -l 11:39:33 60 11:39:54 Anyway I guess I should try calling into libc and then ldd will probably call it dynamic. 11:40:16 But for that I'd need a bunch of things like a PLT and real relocations or something. 11:40:28 My "assembler" has very primitve fixups for local jumps but that's it. 11:40:56 ``` objdump -x /bin/true # x86_64 here too 11:40:56 ​ \ /bin/true: file format elf64-x86-64 \ /bin/true \ architecture: i386:x86-64, flags 0x00000150: \ HAS_SYMS, DYNAMIC, D_PAGED \ start address 0x0000000000001670 \ \ Program Header: \ PHDR off 0x0000000000000040 vaddr 0x0000000000000040 paddr 0x0000000000000040 align 2**3 \ filesz 0x00000000000001f8 memsz 0x00000000000001f8 flags r-x \ INTERP off 0x0000000000000238 vaddr 0x0000000000000238 paddr 0x0000000000000238 align 2**0 \ 11:42:07 Man, you need a hash table and GOT and probably a GNU hash table and all sorts of things. 11:43:03 shachaf: maybe they differ in /proc settings about address randomizatio or something? 11:43:04 Oh, running ld.so directly tells me what's wrong: 11:43:11 uh, sysctl knobs 11:43:24 "error while loading shared libraries: [...]: ELF load command address/offset not properly aligned" 11:43:44 That's a very legitimate complaint, ld.so. 11:44:51 Oh, no, that's what it says on the *statically linked* file. 11:49:26 -!- Sgeo has quit (Read error: Connection reset by peer). 11:49:51 -!- Sgeo has joined. 11:52:48 Oh, what do you know, it's not properly aligned. 11:52:49 shachaf: do you have an LD_LIBRARY_PATH set? I get strace /bin/true 2>&1 | wc -l => 73 and LD_LIBRARY_PATH= strace /bin/true 2>&1 | wc -l => 25... 11:54:40 Oh! 11:54:50 I have an LD_PRELOAD, courtesy of Ubuntu. 11:54:53 oh yeah, and unset LD_PRELOAD too 11:55:06 Because Ubuntu is ridiculous in many ways. 11:55:16 I've probably mentioned how bad this LD_PRELOAD is before. 11:55:18 really, what does Ubuntu deam important enough to LD_PRELOAD? 11:55:25 I missed it. 11:55:30 *deem 11:55:45 I feel left out. I'm running Ubuntu and I don't have a LD_PRELOAD. 11:55:50 So GTK or GNOME decided to switch to drawing decorations in the client and requesting borderless windows from the WM at one point. 11:55:51 int-e: some graphics toolkit thing 11:56:27 This only works particularly well if you're running GNOME. And there's no configuration to disable it. So if you don't run GNOME, they set you up with an LD_PRELOAD that forces GTK to use the old behavior. 11:56:53 Fancy. And awkward. 11:57:09 This is definitely the most reasonable way to do things, rather than, say, patching the source to check an environment variable for using the old behavior. 11:57:26 Or patching the source in any other way. That's not Ubuntu's business. 11:57:58 Anyway I'm stuck with this LD_PRELOAD which constantly makes things fail in annoying ways. 11:58:29 For example Nix programs run with a different library path so they can't find the GTK wrapper and they print an error message whenever I run them. 12:00:56 This must be an Ubuntu 18.04 thing, I'm still running 16.04. What happens if you override LD_PRELOAD? 
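The "ELF load command address/offset not properly aligned" error mentioned above comes down to one congruence: for every PT_LOAD program header, p_vaddr and p_offset must agree modulo p_align, otherwise the segment cannot be mapped at a page boundary. A rough checker for 64-bit ELF files, assuming a little-endian host, with minimal error handling:

    #include <elf.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s file.elf\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        Elf64_Ehdr eh;
        if (fread(&eh, sizeof eh, 1, f) != 1) { fprintf(stderr, "short read\n"); return 1; }

        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            fseek(f, (long)(eh.e_phoff + (Elf64_Off)i * eh.e_phentsize), SEEK_SET);
            if (fread(&ph, sizeof ph, 1, f) != 1) { fprintf(stderr, "short read\n"); return 1; }
            if (ph.p_type != PT_LOAD || ph.p_align <= 1)
                continue;
            /* The loader's complaint corresponds to this congruence failing. */
            if (ph.p_vaddr % ph.p_align != ph.p_offset % ph.p_align)
                printf("PT_LOAD %d misaligned: p_vaddr=%#lx p_offset=%#lx p_align=%#lx\n",
                       i, (unsigned long)ph.p_vaddr, (unsigned long)ph.p_offset,
                       (unsigned long)ph.p_align);
        }
        fclose(f);
        return 0;
    }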
12:01:24 Maybe I don't actually want to know 12:01:42 Could this be specific to Unity (and hence primarily Ubuntu)? 12:02:26 I don't remember whether Ubuntu uses Unity or GNOME by default? 12:02:39 Unity, I thought. 12:02:42 But I think this is a GTK-wide or GNOME-wide decision. 12:02:51 https://wiki.gnome.org/Initiatives/CSD 12:03:32 cpressey: If I override LD_PRELOAD then most things work slightly better except for GTK programs which work quite a bit worse. 12:03:51 shachaf: I see. 12:04:15 what if you use wrappers for GTK programs that restore the LD_PRELOAD? 12:04:40 That's an option. 12:04:40 Ubuntu uses Gnome3 by default in recent versions 12:04:50 But who can know what programs are GTK programs? 12:05:00 shachaf: Ah so it's a nasty surprise still in the making. 12:05:35 firefox, thunderbird, emacs, inkscape, gucharmap... are my main gtk apps? 12:06:37 (Emacs has several frontends but I'm pretty sure the gtk one is what I'm using. I expect it's still gtk2 and won't be affected for a while yet.) 12:06:47 shachaf: ask the package manager what programs it would uninstall if you decided to uninstall gtk 12:07:16 Also GTK is a mess in many other ways. 12:07:36 It does theming in a particular way, but if you run something called a settings-daemon then it starts doing theming in a completely different way. 12:07:39 They can't even decide what the G stands for 12:07:56 And half of your programs work well with a high-DPI screen one way, and half the other way. 12:08:06 Oh, gimp of course. Forgetting about that one is embarrassing. :) 12:08:40 I tried running a settings-daemon not long ago and it was so terrible that I stopped. 12:08:52 Despite it being the only way to make something work. 12:09:05 The year of Linux on the desktop is now. 12:09:37 But don't worry. As soon as I write this compiler I'll write some good GUI programs with it. 12:09:47 Sure you will. 12:10:19 Any day now! 12:10:41 OK, there's no definite compiler planned. But I did write some UI programs using plain X11+OpenGL. 12:11:48 They're surely way better than some kind of GTK nonsense that prints a bunch of dbind-warnings whenever you run it. 12:13:36 At least it's not kbuilding any sycocas. 12:14:59 -!- ais523 has joined. 12:16:04 General-Purpose and System Instructions> "PEXT Parallel Extract Bits \ Copies bits from the source operand, based on a mask, and packs them into the low-order bits of the destination. Clears all bits in the destination to the left of the most-significant bit copied." 12:16:14 … 12:16:42 did they seriously add select from INTERCAL to the x86 instruction set? 12:16:59 although this version is 32-bit or 64-bit, rather than 16-bit or 32-bit 12:17:33 it's part of the BMI2 instruction set, which my processor apparently supports 12:17:51 * ais523 has an urge to feature-test this during C-INTERCAL's build process and use the asm instruction if supported 12:18:07 ais523: yes. some call it sheep and goats. 12:18:24 ais523: you can use the 32-bit one to emulate the 16-bit one though 12:18:50 yes 12:18:58 ais523: you can probably use a gcc intrinsic and an MSVC instrinsic, with ifdefs, rather than an inline asm 12:19:06 oh wait 12:19:15 that's a compiler 12:19:20 that doesn't apply then 12:19:37 -!- heroux has quit (Read error: Connection reset by peer). 12:19:53 inline asm is more fun 12:19:57 -!- heroux has joined. 12:20:25 Microsoft doesn't support inline assembly on x64. 
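For reference, the operation under discussion (INTERCAL's select, which BMI2's PEXT implements in hardware) gathers the bits of the first operand at the positions where the second operand has a 1 and packs them into the low end of the result. A plain-C reference implementation, not C-INTERCAL's actual code:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t select32(uint32_t src, uint32_t mask)
    {
        uint32_t out = 0;
        int outpos = 0;
        for (int bit = 0; bit < 32; bit++) {
            if (mask & (UINT32_C(1) << bit)) {
                out |= ((src >> bit) & 1u) << outpos;   /* pack selected bit */
                outpos++;
            }
        }
        return out;             /* bits above outpos are left clear, as PEXT does */
    }

    int main(void)
    {
        /* Example: pick out the odd-position bits of 0xDEADBEEF. */
        printf("%#x\n", (unsigned)select32(0xDEADBEEFu, 0xAAAAAAAAu));
        return 0;
    }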
12:21:47 that doesn't really matter, C-INTERCAL has a really robust autoconf/automake setup and this is the sort of random thing autoconf is designed for 12:22:27 Does autoconf even work on Windows? 12:22:54 autoconf is awful and I hate its ./configure scripts. 12:23:19 Most of what it does isn't useful and hasn't been useful for decades, and it has real and significant costs. 12:23:21 it works about as well as sh and friends do 12:23:36 fwiw, I agree with you about autoconf solving entirely the wrong problem 12:23:48 but for C-INTERCAL in particular this felt like an upside rather than a downside 12:23:51 If they cared, autoconf people could at least make the configure scripts much faster, but I don't imagine they do, or maybe there just are no autoconf people. 12:23:57 it is not the most serious of projects 12:24:15 Sure, for C-INTERCAL you can get an exception. 12:24:33 Though I feel like autoconf isn't even the enjoyable kind of esocomplexity. 12:24:40 It's just nonsense complexity that makes things bad. 12:25:14 ais523: https://docs.microsoft.com/en-us/cpp/intrinsics/x64-amd64-intrinsics-list?view=vs-2019 suggests that _pext_u64 is the intel-standard intrinsic, though I'll have to check that in the intel architecture manual 12:25:40 if that's right, then that will work the same on gcc and msvc, because gcc has headers implementing all that stuff based on gcc builtins 12:25:41 tbh I'm not sure if C-INTERCAL even compiles on Windows 12:25:48 I got it compiling on /DOS/ once but that's different 12:25:48 shachaf: https://github.com/GregorR/autoconf-lean 12:26:12 By a person who used to hang out here frequently once 12:26:50 -!- j-bot has quit (Ping timeout: 244 seconds). 12:27:02 yeah, the intel architecture reference confirms that _pext_u32 and _pext_u64 are the functions corresponding to the PEXT instruction 12:27:21 cpressey: He still turns up once every blue moon. 12:27:23 [[Language list]] https://esolangs.org/w/index.php?diff=64796&oldid=64795 * Hanzlu * (+10) 12:27:34 it's probably still worth to test for this in the autoconf, but it should work 12:28:21 cpressey: and of course umlbox is still actively used 12:28:31 the gcc headers even define these so that they emulate the same operation even if you compile to older instruction sets or non-x86 cpu 12:28:40 and hackbot 12:30:00 ugh, is it correct to write this instruction as asm or as machine code? 12:30:16 I guess it has to be asm so that gcc can participate in register allocation 12:31:04 That's certainly the preferable way, if you want to shun the compiler intrinsic. 12:31:25 It's possible to prefer it, but not mandatory. 12:32:19 yes, but this is INTERCAL, so I have to give at least passing thought to the idea that writing it as raw bytes would mean you didn't have to worry about what syntax the assembler used 12:32:40 wob_jonas: what header files are those even in? 
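On the header question: for GCC and Clang, _pext_u32 and _pext_u64 are declared in <immintrin.h> and are only usable when compiling with -mbmi2 (or a -march that implies it); MSVC lists the same names in its intrinsics page. A sketch of that route; the __BMI2__ guard is the GCC/Clang macro (MSVC would need its own feature test), and the fallback loop is only there so the sketch builds everywhere, it is not C-INTERCAL's configure machinery:

    #include <stdint.h>
    #include <stdio.h>

    #if defined(__BMI2__)
    #include <immintrin.h>
    static uint64_t pext64(uint64_t src, uint64_t mask)
    {
        return _pext_u64(src, mask);        /* compiles to a single PEXT */
    }
    #else
    static uint64_t pext64(uint64_t src, uint64_t mask)
    {
        /* Bit-by-bit software equivalent of PEXT. */
        uint64_t out = 0;
        for (int bit = 0, pos = 0; bit < 64; bit++)
            if (mask >> bit & 1)
                out |= (src >> bit & 1) << pos++;
        return out;
    }
    #endif

    int main(void)
    {
        printf("%#llx\n", (unsigned long long)pext64(0x123456789ABCDEF0ull,
                                                     0x00FF00FF00FF00FFull));
        return 0;
    }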
12:32:56 tbh checking for inline asm support in autoconf is probably easier than checking for a specific header file 12:33:30 ais523: look up the header file name and the type of the function at https://docs.microsoft.com/en-us/cpp/intrinsics/x64-amd64-intrinsics-list?view=vs-2019 12:33:43 12:34:05 hmm, neat, seems like both gcc and clang support it 12:34:22 that'd clearly be the better way to do things, which gives a reason to avoid it 12:35:26 ais523: yes, this happens to most of the new x86 instructions; it's only old instructions BSF and BSR that fall through the cracks and have like three different sets of compiler intrinsics that you have to ifdef between, because neither msvc supports the gcc builtins nor backwards 12:36:04 I contributed the parts of http://software.schmorp.de/pkg/libecb.html where it can use the MSVC wrappers for BSF and BSR, which is why I know 12:36:15 is fused multiply-add broken the same way? it's the instruction that's different between Intel and AMD due to a lack of coordination 12:36:57 you should note that even though msvc and gcc both support this, the semantics differ: on msvc, the intrinsic will just emit that instruction even if you're compiling for an older cpu, 12:37:46 for gcc it emits something that gives the same computation result as that instruction would perform, which for such new instructions won't actually call that instruction, unless you're explicitly compiling with a high -march 12:38:09 I don't know, I don't follow how the fused multiply-add and all that neural network nonsense worked, sorry 12:39:55 wob_jonas: it's a silly history 12:40:08 AMD and Intel came out with incompatible implementations of the same instruction 12:40:21 [[Special:Log/newusers]] create * RetroBug * New user account 12:40:27 then both dropped their own version of it and implemetend the other's, so they're still incompatible but in the other direction 12:42:36 [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=64797&oldid=64793 * RetroBug * (+68) 12:43:05 One fun fact about ELF is that the ELF 64 standard says hash table entries are 64 bits, but most implementations use 32 bits. 12:43:22 I think that means the standard is wrong rather than the implementations. 12:47:38 hmm, I suspect this inline asm version may actually be substantially faster than what was there before; performance improvements are great! 12:47:42 now, I wonder how best to do mingles 12:48:09 AVX and friends have mingle instructions, but sadly they only mingle at the byte level 12:49:00 What's mingle? 12:51:11 ais523: that's what the opposite instruction PDEP is for. if you have PEXT, you also have PDEP. 12:51:40 shachaf: alternates bits in two operands to form a combined operand of twice the width 12:51:48 wob_jonas: right, two PDEPs and an OR would do it 12:53:19 ais523: but intercal code often uses mingle followed by an intercal bitwise followed by selecting the odd or even bits, which you can optimize to just a bitwise op 12:53:37 or a bitwise op and a shift 12:54:03 yes, C-INTERCAL does that optimisation already 12:54:47 I mentioned at some point that I think intercal code could use that redundant representation of integers that's base 2 but digits go from -2 to 1, because you can do arithmetic on that representation with the intercal ops faster 12:55:09 no wait 12:55:14 the digits go from -1 to 1 12:55:19 inclusive 12:55:33 yes but it's way harder to store in memory 12:56:25 I feel like there are very limited uses for inline assembly nowadays. 
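The "two PDEPs and an OR" recipe for mingle looks like this. PDEP is PEXT's inverse: it scatters the low-order bits of its source to the 1 positions of the mask. The operand order below (first operand into the more significant bit of each pair, so mingle(0xFFFF, 0) gives 0xAAAAAAAA) follows the usual description of INTERCAL's mingle; the guard and the function name are illustrative, not C-INTERCAL's actual code:

    #include <stdint.h>
    #include <stdio.h>

    #if defined(__BMI2__)
    #include <immintrin.h>
    static uint32_t mingle(uint16_t hi, uint16_t lo)
    {
        return _pdep_u32(hi, 0xAAAAAAAAu)   /* spread hi into odd bit positions  */
             | _pdep_u32(lo, 0x55555555u);  /* spread lo into even bit positions */
    }
    #else
    static uint32_t mingle(uint16_t hi, uint16_t lo)
    {
        uint32_t out = 0;
        for (int bit = 0; bit < 16; bit++) {
            out |= (uint32_t)((hi >> bit) & 1) << (2 * bit + 1);
            out |= (uint32_t)((lo >> bit) & 1) << (2 * bit);
        }
        return out;
    }
    #endif

    int main(void)
    {
        printf("%#x\n", (unsigned)mingle(0xFFFF, 0x0000));   /* 0xaaaaaaaa */
        return 0;
    }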
12:56:33 Almost everything is covered by either top-level assembly or intrinsics. 12:56:53 So Microsoft's decision is perhaps reasonable. 12:57:03 What are uses for inline assembly? 12:58:13 shachaf: out-optimising the compiler is one thing 12:58:25 accessing some specific instructions 12:58:49 ais523: no it's not. you just store it as two integers, and they represent their difference 12:58:57 hmm wait 12:58:59 I'm confusing this up 12:58:59 like rdrand or rdtsc or... 12:59:00 this morning, I was curious about the following problem: suppose you have a function that generates a sequence of ints and can't be parallelised 12:59:14 I'll have to clear this up at some point, but now I don't know how they work 12:59:15 what's the fastest way to store the generated ints into memory, assuming that there are too many to fit into the L2 cache? 12:59:44 But at what point do you need to out-optimize the compiler within a function? 13:00:20 I think in most such cases you end up wanting to write the whole function in assembly. 13:00:20 gcc's and clang's approaches were utterly different, but very comparaible in speed; I tried a few other things on my own, and eventually found one that was slightly but consistently faster 13:00:26 Specific instructions sounds like what intrinsics are for. 13:00:48 @time 13:00:51 Local time for shachaf is Tue Jul 30 06:00:49 2019 13:00:52 Time to go to sleep. 13:01:05 shachaf: well, in my case, the loop was still written in C 13:03:16 funnily enough, I decided to use a repeated rotate-left as a standin for the "function that generates a sequence of ints and can't be parallelised" (yes, I know you can parallelise that in practice) 13:03:46 and the compiler didn't recognise it, so I ended up writing the "add %0, %0\n\tadc $0, %0" manually 13:08:05 -!- sebbu has joined. 13:08:45 actually my experience is that even modern compilers are fairly bad at micro-optimisation, they're just good at knowing about more long-range optimisations that humans don't often think of 13:10:00 [[ACL]] https://esolangs.org/w/index.php?diff=64798&oldid=64789 * Hanzlu * (+204) 13:12:00 [[ACL]] https://esolangs.org/w/index.php?diff=64799&oldid=64798 * Hanzlu * (-2) 13:15:55 [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64800&oldid=64794 * A * (+166) 2019 esolang 13:17:26 [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64801&oldid=64800 * A * (+18) No 13:20:47 OK, C-INTERCAL repo updated with the use of inline asm for PEXT 13:21:33 that was quick 13:21:50 does it also do PDEP for mingle? 13:22:20 not yet 13:22:27 our exiting mingle is fairly optimised as it is 13:22:59 ok 13:23:26 [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64802&oldid=64801 * A * (+159) 13:23:29 that said, it's still a /lot/ of instructions 13:24:28 hmm, I wonder how you ask gcc to pick an arbitrary temporary for you 13:24:37 ais523: you know that Warren's "Hacker Delight" talks about the mingling (shuffling) and selecting, right? I don't recall what it says, but it definitely talks about them. 13:24:42 [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64803&oldid=64802 * A * (-22) 13:24:43 maybe just say "register int temp;" and assign to it without reading it 13:25:16 ais523: I don't think you even need "register" if it's arbitrary 13:25:29 oh, duh, you just do it one instruction at a time 13:25:55 wob_jonas: well, it has to actually /be/ a register, although gcc's =r hint is sufficient to teach it about that 13:26:14 -!- j-bot has joined. 
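The "add %0, %0\n\tadc $0, %0" pair quoted above is a rotate-left-by-one: ADD doubles the value and leaves the old top bit in the carry flag, and ADC $0 feeds that carry back into bit 0. A sketch in GCC-style extended asm, x86-64 only (as noted above, MSVC's x64 compiler has no inline asm at all), next to the portable spelling the compiler was expected to recognise:

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t rotl1_asm(uint64_t x)
    {
        asm("add %0, %0\n\t"
            "adc $0, %0"
            : "+r"(x)          /* read-write register operand */
            :
            : "cc");           /* the flags are clobbered */
        return x;
    }

    static uint64_t rotl1_c(uint64_t x)
    {
        return (x << 1) | (x >> 63);   /* usually compiled to a single ROL */
    }

    int main(void)
    {
        uint64_t v = 0x8000000000000001ull;
        printf("%#llx %#llx\n",
               (unsigned long long)rotl1_asm(v),
               (unsigned long long)rotl1_c(v));   /* both print 0x3 */
        return 0;
    }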
13:26:23 make the asm clobber it? 13:26:29 rather than write into it 13:26:43 as in, fourth argument or something 13:26:45 clobbers have to be fixed in the source code, though 13:26:59 [[What Mains Numbers?]] https://esolangs.org/w/index.php?diff=64804&oldid=64803 * A * (+133) 13:27:01 hmm 13:27:03 I think the correct thing to do is to just make the temporary visible to gcc explicitly so that it can do SSA and friends on it 13:27:06 and spills, and the like 13:28:36 [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64805&oldid=64804 * A * (-3) 13:30:31 [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64806&oldid=64805 * A * (+19) 13:30:32 ais: about compilers being bad about micro-optimization, https://m.youtube.com/watch?v=bSkpMdDe4g4 13:31:49 I found some of those impressive, sums compressed to formulas, multiplication turned differently into combinations of bit shifts etc. 13:32:07 (probably very basic stuff, I'm no expert) 13:32:08 -!- wob_jonas has quit (Ping timeout: 245 seconds). 13:33:54 -!- wob_jonas has joined. 13:34:38 [[What Mains Numbers?]] https://esolangs.org/w/index.php?diff=64807&oldid=64806 * A * (+22) 13:35:47 ais523: you should mark it "volatile volatile" 13:36:38 but it isn't volatile 13:37:20 oklopol: AMD's optimisation guide has a list of constants for which it's worth using alternative code to multiply by them 13:37:36 the smallest nonnegative integer for which IMUL is the fastest way to multiply by that integer is 22 13:37:44 for every smaller integer, there's some trick 13:37:58 (disappointingly, they didn't even bother to list the tricks for multiplying by 0 or 1) 13:38:27 ais: yes that sort of stuff, optimizing mul by constant to shifts, and also vice versa if you try to be clever :P 13:39:00 And differently based on what you're compiling for 13:41:05 optimising to shifts is boring, the /real/ trick on x86 is to use the AGU to do multiplications by unexpected numbers 13:41:47 e.g. for multiply by 9, AMD suggests "lea reg1, [reg1 + reg1 * 8]" 13:41:48 This is also shown on the vid iiuc 13:42:04 Yes that's automatically done by optimizers 13:42:05 btw, LEA is still a total hack :-) 13:42:15 Yes 13:42:20 I'd expect any compiler developer who cares about optimization to have read this document already 13:42:22 [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64808&oldid=64807 * A * (+228) 13:43:56 [[What Mains Numbers?]] https://esolangs.org/w/index.php?diff=64809&oldid=64808 * A * (-34) An infinite loop in a language that only provides finite loops! 13:44:52 -!- wob_jonas has quit (Ping timeout: 272 seconds). 13:46:26 [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64810&oldid=64809 * A * (+43) /* Infinite loop */ 13:50:12 [[What Mains Numbers?]] https://esolangs.org/w/index.php?diff=64811&oldid=64810 * A * (+61) 13:51:40 OK, mingles are now also hardware-accelerated 13:54:08 [[What Mains Numbers?]] https://esolangs.org/w/index.php?diff=64812&oldid=64811 * A * (-27) /* What Mains Numbers? 
*/ 13:55:47 [[What Mains Numbers?]] https://esolangs.org/w/index.php?diff=64813&oldid=64812 * Ais523 * (+124) Mixed undo revisions 64809, 64810 by [[Special:Contributions/A|A]] ([[User talk:A|talk]]): not an infinite loop, it just allocates so much memory that it'll probably thrash nearly-indefinitely 13:56:05 A should stop jumping to assumptions :-( 13:56:39 btw, I had a great idea about how pointers should work 13:57:04 instead of pointing to the start or end of an object, they should point to the middle (this means adding an extra bit so you can point into the middle of a byte) 13:57:33 this assumes that all your allocations are power-of-2-sized and aligned, otherwise there's no real gaini 13:57:48 -!- wob_jonas has joined. 13:57:50 nice 13:58:06 exactly in the middle of objects? hmm 13:58:17 but if you have that, then the middle-pointer uniquely specifies both the memory you're accessing and the width of it, which should make things like hardware bounds checking efficiently possible 13:58:55 How does it store the width? 13:58:57 what? 13:59:07 on x86_64 you could make up for the extra bit at the end by dropping bit 62, it's never going to get used anyway 13:59:13 Taneb: count the number of trailing zeroes 13:59:39 objects are power-of-2-sized and aligned, thus the middle is aligned with respect to half the object's size but misaligned with respect to the object's full size 14:00:11 thus, you can use the alignment to determine the size, without ever having a pointer that's randomly more aligned than it should be 14:00:43 that won't give exact bounds checks though, only bounds checks rounded up to a power of two or something close 14:00:57 well, you only allocate objects in power-of-2 sizes 14:01:05 (there are good reasons for a malloc to do that anyway) 14:01:13 sure 14:01:14 the main issue is structs, I think 14:01:18 binary buddy block allocator 14:01:44 you can also do a fibonacci version of this alignment scheme, just to screw with people 14:01:57 Or you can add 3*2^k into the mix for fun. 14:02:06 the algorithm of "just allocate in the first available aligned address of the appropriate size" is great, when you use power-of-two sizes only it actually works 14:02:12 (That seems easier than fibonacci.) 14:02:24 (Also Fibonacci seems awful for alignment.) 14:02:25 allocate only objects of fibonacci size, at addresses whose address in zeckendorf end with as many zeroes 14:02:34 yes, the fibonacci version is definitely in the screwing-with-people realm 14:02:47 int-e: only on current cpus, which use 64-byte 64-aligned cache lines 14:03:10 I have a suspicion that 64-byte will be the correct size for a cache line for the foreseeable future 14:03:17 When will we move to 128? Also, RAM rows enter the picture as well at some point. 14:03:59 just like my tests indicate that 16 bytes is the correct size for a bulk write to memory (if you're getting the data as individual ints rather than a bulk read) 14:04:33 int-e: cache lines are weird, ideally you'd want them to be /smaller/, the only reason to have them that large is to reduce the amount of bookkeeping you have to do 14:05:02 a larger cache line would mean that you had so many of the things that you could afford to often waste data space in the L1 cache, but were very tight on bookkeeping cache 14:05:10 which seems implausible with modern processor designs 14:05:39 I guess maybe L2 would benefit from longer cache lines? 
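The middle-pointer idea can be prototyped with a count-trailing-zeroes. If every allocation is a power of two in size, at least 2 bytes, and aligned to its own size, the middle address of a 2^k-byte object has exactly k-1 trailing zero bits, so one pointer encodes both base and size. A sketch using the GCC/Clang builtin __builtin_ctzll; the extra half-bit needed for 1-byte objects is ignored and the function names are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    static uintptr_t to_middle(uintptr_t base, size_t size)   /* size = 2^k, k >= 1 */
    {
        return base + size / 2;
    }

    static size_t size_from_middle(uintptr_t mid)
    {
        return (size_t)2 << __builtin_ctzll(mid);     /* 2^(ctz+1) */
    }

    static uintptr_t base_from_middle(uintptr_t mid)
    {
        return mid - size_from_middle(mid) / 2;
    }

    /* Hardware-style bounds check: is the byte at p inside the object `mid` names? */
    static int in_bounds(uintptr_t mid, uintptr_t p)
    {
        uintptr_t base = base_from_middle(mid);
        return p >= base && p < base + size_from_middle(mid);
    }

    int main(void)
    {
        uintptr_t base = 0x7f0000400000u;             /* pretend 64-byte allocation */
        uintptr_t mid  = to_middle(base, 64);
        printf("mid=%#lx size=%zu base=%#lx inside=%d outside=%d\n",
               (unsigned long)mid, size_from_middle(mid),
               (unsigned long)base_from_middle(mid),
               in_bounds(mid, base + 63), in_bounds(mid, base + 64));
        return 0;
    }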
14:05:40 int-e: in a hypothetical cpu that has 55 and 89 byte cache lines, aligned to fibonacci round addresses 14:05:46 but there are obvious reasons to want them the same size as L1 14:05:54 -!- oklopol has quit (Ping timeout: 258 seconds). 14:05:56 wob_jonas: I'm not going there. 14:07:36 I wonder what the performance of a malloc that, for large objects, just maps a ridiculous amount of memory as MAP_NORESERVE and relies on the kernel to do the actual allocations when page faults happen 14:08:09 (the page faults were going to happen anyway, so there seems to be no particular reason to do anything at other times) 14:08:44 the huge advantage of this is that realloc becomes a nop, which helps make your write loops tighter 14:09:06 ais523: yeah, but that doesn't work too well when you allocate a lot of small objects, which is a common case 14:09:14 also you don't have infinite address space 14:09:18 you need a different algorithm for small objects, yes 14:09:25 so that they don't need a separate page 14:09:32 but you do pretty much have infinite address space 14:09:33 right 14:09:46 64 bits is a /lot/ 14:09:55 you don't have 64 bits, 14:09:57 but even without that 14:09:58 you can allocate 4 GiB for every object and still have 32 bits left 14:10:08 the kernel has to do bookkeeping for what you allocate 14:10:22 no, you don't have 64 bits of virtual address space 14:10:28 yes, sadly 14:10:35 it's, what, 48 bits on modern processors? 14:10:42 that's just what the architecture allows us to expand the address space without breaking binary compatibility 14:10:54 wait, 47 14:11:10 because half the virtual address space is reserved for kernel-internal use 14:11:22 (and that too only if people don't start using high bits for tag bits when they have perfectly usable low bits instead, like they did in the 32-bit era and ended up with a prolog interpreter that couldn't use more than 256 megabytes of memory) 14:11:46 even so, that's still 32767 self-reallocing objects, there are plenty of programs that are unlikely to use anywhere near that many 14:12:10 wob_jonas: they can't, x86_64 actually intentionally crashes if it sees a high bit used as a tag bit 14:12:13 I don't know how many bits we have now, they keep changing that every decade or so, I'm not following 14:12:22 ais523: only if you don't mask it 14:12:27 same as with the 32-bit things 14:12:39 if you explicitly mask it off before using it as an address, it will work 14:12:58 right, because the processor can't see how the value was derived 14:13:11 but low bits is still easier, because if you know all the low bits, you can usually remove them by just using the right offset 14:13:35 most people do get this right though, so it's not much of a worry 14:13:54 that one prolog interpreter was more just an unfortunate exception 14:14:03 anyway, one thing that's really annoying is that malloc() is not async-signal-safe 14:14:20 the first-power-of-2 technique can be implemented lock-free, I think 14:14:37 in which case it probably should be, so that people can allocate memory in their signal handlers without deadlocks 14:15:14 yeah, you're right, 48 bits of virtual address space now, I think 14:17:12 ais523: really? do you mean even without a small performance penalty for the common case of sane programs that don't try to alloacte from a signal handler? 
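On the tag-bit point above: dereferencing a non-canonical x86-64 address faults, so high-bit tags have to be masked off before every use, while low-bit tags come for free whenever the pointee's alignment leaves the low bits zero. A small sketch of the low-bit variant for 8-byte-aligned objects (three spare bits; the helper names are illustrative):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define TAG_MASK  ((uintptr_t)0x7)      /* 8-byte alignment frees the low 3 bits */

    static void *tag_ptr(void *p, unsigned tag)
    {
        uintptr_t u = (uintptr_t)p;
        assert((u & TAG_MASK) == 0 && tag <= TAG_MASK);
        return (void *)(u | tag);
    }

    static unsigned get_tag(void *p)   { return (unsigned)((uintptr_t)p & TAG_MASK); }
    static void    *strip_tag(void *p) { return (void *)((uintptr_t)p & ~TAG_MASK); }

    int main(void)
    {
        double *d = malloc(sizeof *d);     /* malloc results are suitably aligned */
        *d = 42.0;

        void *tagged = tag_ptr(d, 5);      /* e.g. a 3-bit type tag */
        printf("tag=%u value=%g\n", get_tag(tagged), *(double *)strip_tag(tagged));

        free(strip_tag(tagged));           /* always strip before dereferencing or freeing */
        return 0;
    }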
14:17:50 if you really want to allocate from a signal handler, then use a custom more expensive allocator for those parts of the code that may run from a signal handler 14:18:06 wob_jonas: well you need to use a lock or atomic /somewhere/ 14:18:29 but I think it's usually better to just not do anything fancy from a signal handler 14:18:29 I think there's debate about which is faster in the common, non-contended case, but I'm guessing they're much the same 14:18:58 and when there's no contention the algorithm runs quickly (unless there's /so much/ contention that the processor starts predicting the branch as taken, which is likely to be the least of your issues) 14:20:41 -!- ARCUN has joined. 14:21:35 Anyone know any good FPGAs? I need one for my esoteric computer. 14:22:50 in my experience, FPGA toolchains are really terrible 14:22:58 `? rnoodle 14:22:59 rnoodle? ¯\(°​_o)/¯ 14:23:00 `? rnooodle 14:23:01 rnoooodle? ¯\(°​_o)/¯ 14:23:04 `? rnoooodle 14:23:05 rnoooodle? ¯\(°​_o)/¯ 14:23:07 `? rnooooodle 14:23:08 rnooooodle? ¯\(°​_o)/¯ 14:23:09 `? rnoooooodle 14:23:10 rnoooooodle? ¯\(°​_o)/¯ 14:23:15 `? rnooooooodle 14:23:15 as for the FPGAs themselves, for the majority of tasks, either most FPGAs will be good enough or affordable FPGAs won't b e good enough 14:23:16 rnooooooodle? ¯\(°​_o)/¯ 14:23:17 -!- ARCUN has quit (Remote host closed the connection). 14:23:23 `? rnodle 14:23:24 rnodle? ¯\(°​_o)/¯ 14:23:27 so the main difficulty is finding a way to wire them up to your computer 14:24:18 why? don't those FPGAs have IO devices built in? 14:25:24 -!- ARCUN has joined. 14:25:50 You need to do it in a field. 14:25:58 They're field programmable, you see. 14:26:24 If you happen to be in a forest, tough luck. 14:26:30 I was thinking of using an Altera cyclone ii mini to use, but I heard that the Spartan series is good too 14:28:47 One of the main problems is, how would I get it to display items on the screen? VHDL really doesn't make this any easier, as it's not be most consice of languages 14:29:52 -!- ARCUN has quit (Remote host closed the connection). 14:31:52 https://github.com/stacksmith/fpgasm 14:32:26 hmm, so in a quick test, Linux was quite happy to allocate me 16 GiB of address space in one large mapping 14:32:58 even though I don't have that much memory in physical or swap space or both combined 14:33:12 well sure, many computers these days have 16 GB physical memory 14:33:28 and I could read/write random addresses in it without any obvious performance issues 14:34:36 but won't the kernel still need to keep about 1/1000 the size of that virtual memory for administration? 14:34:42 this leads me to suspect that the most efficient way to deal with memory, if you don't care about getting segfaults for wild accesses, is to only ask the kernel for memory once in the lifetime of the program, and use writes to memory to allocate it and madvise to free it 14:34:58 wob_jonas: page caches have multiple levels nowadays 14:35:02 unless you use large pages that is, but large pages would defeat the problem 14:35:15 ais523: sure, but ... I don't know how that works in the kernel 14:35:17 maybe 14:35:30 also I don't see how the same problem doesn't happen even if you allocate a bit at a time and use brk and mmap and whatever to request more as you need it 14:35:41 err, "a small amount at a time", not a literal bit :-) 14:36:19 sorry, I was trying to argue against the method you mentioned above, of allocating 4G for every large object 14:36:25 does MADV_REMOVE work with anonymous mappings, I wonder? 
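The experiment described above, reserving a huge MAP_NORESERVE anonymous mapping once, letting page faults populate it, and handing pages back with madvise(), looks roughly like this. For private anonymous memory the releasing call is MADV_DONTNEED (or MADV_FREE on newer kernels); madvise(2) documents MADV_REMOVE for shmem/tmpfs-backed mappings, so it is probably not the one to use here. Linux-specific sketch with minimal error handling; the 16 GiB figure just mirrors the test in the log:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t reserve = (size_t)16 << 30;          /* 16 GiB of address space */
        unsigned char *p = mmap(NULL, reserve, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Touching the memory is what actually allocates it, one page fault at a time. */
        memset(p, 0xAB, 1 << 20);                   /* populate the first 1 MiB */
        printf("first byte after write: %#x\n", (unsigned)p[0]);

        /* "Free" the range: the pages go back to the kernel, and the next read
         * faults in fresh zero pages. */
        if (madvise(p, 1 << 20, MADV_DONTNEED) != 0) perror("madvise");
        printf("first byte after MADV_DONTNEED: %#x\n", (unsigned)p[0]);

        munmap(p, reserve);
        return 0;
    }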
14:36:36 wob_jonas: ah, I see 14:38:36 hmm, I wonder if any memset implementations use madvise to zero memory? I'm guessing not, it'd be insane 14:38:45 but memset is the sort of function where insane optimisations can make sense 14:38:57 dunno 14:39:10 (the idea would be to swap out the page backing the memory you're trying to zero for a freshly zeroed page) 14:39:30 I don't know whether Linux has a background memory zeroing daemon (or equivalent); I know Windows does 14:39:37 I don't think it would help much, in the long run, as long as you're using memset for memory you want to use later, because the kernel has to zero the page eventually 14:40:06 Windows has a supply of pre-zeroed physical memory pages that it hands out to applications, and zeroes pages in the background after they're unmapped 14:40:37 yeah. I think linux has something like that too 14:42:00 kswapd, apparently 14:42:18 it doesn't run constantly, only when the number of zeroed pages is low 14:42:27 if it gets very low the kernel foregrounds the page-zeroing task so that it never runs out 14:44:37 -!- ais523 has quit (Remote host closed the connection). 14:44:50 -!- ais523 has joined. 14:49:27 Should I learn LLVM assembly or should I not bother 14:50:49 -!- oklopol has joined. 14:51:16 -!- wob_jonas has quit (Ping timeout: 246 seconds). 14:51:36 cpressey: that's up to you 14:53:00 I'd say, only if you want to use it for something 14:53:14 or if you're interested in SSA-based languages in general 14:53:33 it's really a multiple-level language, though, it can express a lot of different levels of abstraction and is designed to compile into lower abstraction levels of itself 14:53:41 (this is a common property for compiler intermediate representations) 14:53:50 so really, "learning LLVM" is about learning a specific subset of it 14:55:08 whatever you need for whatever it is you're doing 14:55:49 I have two compiler projects that became dead ends because I tried to generate C and it just got frustrating and boring and I abandoned them. 14:56:40 I think generating C is generally easier than generating LLVM, also less platform-specific 14:56:49 [[ACL]] https://esolangs.org/w/index.php?diff=64814&oldid=64799 * Hanzlu * (+1117) 14:56:55 (LLVM is slightly platform-specific, enough so that you can't really generate "portable LLVM") 14:57:14 OK, then "no" I guess 14:57:15 perhaps WebAssembly would be an interesting target to use instead, that's fairly regular as ASMs go 14:57:40 I'll just leave them as dead ends 14:57:53 Bye. 14:57:55 -!- cpressey has quit (Quit: WeeChat 1.4). 15:02:17 -!- ais523 has quit (Quit: quit). 15:16:06 -!- doesthiswork has joined. 15:20:30 -!- wob_jonas has joined. 15:21:23 I was in London for the weekend. It seems that the stores sell milk in both one liter size and a size slightly larger than one liter, the latter is apparently somewhat round in some non-metric measurement unit. 15:22:21 Also they sell half liter and two liter bottles. I still find that strange. Half liter milk bags used to exist here, but only a very long time ago, and I've only ever seen ones larger than one liter abroad. 15:28:15 I sometimes buy the half-litre bottles if I'm thirsty when I'm out and about 15:38:26 -!- lldd_ has joined. 15:42:15 drinking milk as a beverage is weird to me 15:43:28 It's weird to a lot of people 15:43:44 kmc: is that because you live in a place where you can't easily buy fresh milk, only 15:44:20 UHT milk? 
because fresh milk tastes much better, but I know it's not available everywhere 15:44:41 But like, it's cheaper and healthier (here at least) than soft drinks 15:46:17 ...now I'm thirsty 15:48:10 -!- FreeFull has quit. 15:51:10 -!- wob_jonas has quit (Remote host closed the connection). 16:08:26 we do 1l and 1.5l here 16:09:42 We have 1.75 16:12:38 UHT milk isn't common in the USA 16:12:53 we mostly have regular pasteurized milk 16:12:57 which needs to be refridgerated 16:13:22 I bought some lemonade the other day, didn't notice it was unpasteurized... within less than a week the bottle had puffed up to almost a round cylinder 16:13:39 I started unscrewing it in the sink and the cap came off with a bang 16:17:42 probably the "slightly larger than one liter" was 2 imperial pints? 16:18:00 it's great how the UK's non-metric unit isn't even the same as the US's non-metric unit of the same name 16:18:18 2 imperial pints is a bit more than 1L but 2 US pints is a bit less than 1L 16:18:32 -!- xkapastel has joined. 16:20:11 Someone once taught me a rhyme, "A litre of water is a pint and three quarter" 16:20:44 I didn't realise the US pint was different 16:21:08 that rhyme doesn't even rhyme very well 16:21:22 a litre of wuarter 16:21:26 It rhymes almost perfectly to me 16:21:43 You must talk weirdly 16:21:52 (or, like, have a rhotic accent) 16:25:55 -!- Sgeo_ has joined. 16:26:06 I like phonology 16:29:13 -!- Sgeo has quit (Ping timeout: 245 seconds). 17:06:36 I live in the US and I have no idea what a pint is 17:13:01 -!- b_jonas has joined. 17:33:33 kmc: yes, probably 17:33:48 I didn't much pay attention right there, and I don't have the bottles or photos of them anymore 17:33:56 mm 17:34:02 wb_jonas 17:37:20 still no IOCCC source codes 17:37:55 :( 18:16:12 so the Giant says that the end of the sixth OotS book is in sight. and there will only be seven books. we must be two thirds ratio into the story by now. 18:16:38 I presume the last book will be the thickest, because that's how these series usually go, but still. 18:18:58 [[ACL]] https://esolangs.org/w/index.php?diff=64815&oldid=64814 * Hanzlu * (-178) 18:21:17 can you imagine living in a time when everyone knows OotS as an epic that is already complete, and we tell children about how we had to wait ten days (uphill both ways) for the next strip to appear, over and over again for each strip? 18:25:00 although I guess we can already tell them about when Harry Potter wasn't yet complete 18:27:51 -!- ARCUN has joined. 18:28:47 Ubuntu came out with the 19.04 version 18:28:58 I almost installed 18.04 18:29:10 -!- ARCUN has left. 18:30:09 and #esoteric is logged way back so we can even prove it 18:58:49 -!- lldd_ has quit (Quit: Leaving). 19:09:28 although I guess we can already tell them about when Harry Potter wasn't yet complete => was it too published strip by strip? 19:15:13 arseniiv: no, but we had to wait for the last three books 19:31:45 it would be quite interesting if Harry was originally a comic series 19:34:15 dunno. that would make the books more expensive, I think, so it would get to fewer people 19:35:10 the way they are, with books, I can have the complete story in seven books. in comics, I could only have slices. 
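For the record, the pint arithmetic from earlier in this stretch checks out against the standard definitions (imperial pint about 568.26 ml, US liquid pint about 473.18 ml):

    \[
      2 \times 0.568\,\mathrm{L} \approx 1.137\,\mathrm{L} > 1\,\mathrm{L},
      \qquad
      2 \times 0.473\,\mathrm{L} \approx 0.946\,\mathrm{L} < 1\,\mathrm{L},
      \qquad
      \frac{1\,\mathrm{L}}{0.568\,\mathrm{L/pint}} \approx 1.76\ \text{imperial pints}.
    \]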
19:35:52 b_jonas: agree 19:36:01 unfortunately 19:36:25 hm, there are some prose/comic hybrids out there, maybe it’s a good format 19:36:39 illustrated books, yes 19:36:43 they can be good 19:37:02 I have Matilda by Roald Dahl on my bookshelf, but that one is short 19:37:51 oh, I didn’t know that’s illustrated originally (I only have seen a film) 19:38:10 I also have some of the Kästner books 19:38:32 also, is it translated, I mean Matilda? 19:38:52 there is a translation, and I've read it, but in this case, I have the original English version of Matilda on my shelf 19:39:01 the Kästner books I only have in translation 19:39:13 thanks 19:39:32 Matilda is one of the books I've met when I was very young, but only got the original more recently 19:41:07 BTW I don’t like very much how it ends, “and she didn’t need to use her telekinesis almost ever”, is it a tad boring 19:41:27 no no, it ends by Matilda _losing_ her telekinesis 19:41:32 oh 19:41:36 there's some speculation too on why 19:41:40 but that's not even the important point 19:41:43 the movie guys lied to me 19:42:13 the more important is that it ends by Matilda living happily ever after with her teacher Ms Honey in the house that she inherited, instead of with the parents who don't care much about her 19:42:46 I understand what is it she didn’t have is a loving family, yeah, I agree it’s greater, but still 19:44:42 it’s like there can only be one thing more important that all the others, and it doesn’t ring too true, even when I was a kid and saw the movie version the first time 19:45:07 anyway the story is good 19:45:51 and I can also say if Matilda is okay with no superpowers, then so am I :D 19:46:18 why wouldn't she be okay? she didn't ask for them anyway, and she was never dependent on them 19:48:28 right 19:50:41 @tell ais523 this new smb3 tas is even super cooler than last time thx 19:50:41 Consider it noted. 19:57:24 -!- xkapastel has quit (Quit: Connection closed for inactivity). 20:03:13 -!- Lord_of_Life_ has joined. 20:03:36 -!- Lord_of_Life has quit (Ping timeout: 272 seconds). 20:05:56 -!- Lord_of_Life_ has changed nick to Lord_of_Life. 21:51:46 -!- xkapastel has joined. 21:57:38 `? union 21:57:39 An union is the opposite of an ion. 21:57:42 `q 21:57:43 1288) (btw, "q = 1-p" should be the standard definition of q, IMO) 22:51:47 -!- b_jonas has quit (Quit: leaving). 23:09:38 -!- MDude has quit (Ping timeout: 245 seconds). 23:24:24 [[ACL]] https://esolangs.org/w/index.php?diff=64816&oldid=64815 * Hanzlu * (+439) 23:30:52 [[ACL]] https://esolangs.org/w/index.php?diff=64817&oldid=64816 * Hanzlu * (+479) 23:31:01 -!- MDude has joined. 23:33:56 [[ACL]] https://esolangs.org/w/index.php?diff=64818&oldid=64817 * Hanzlu * (+32) 23:53:55 [[ACL]] https://esolangs.org/w/index.php?diff=64819&oldid=64818 * Hanzlu * (+118)