00:00:04 where does the data go, then? 00:00:20 Sorry, I omitted that part, should've put some ... in there. 00:00:33 Both of the DOCTYPEs are XHTML 1.0 Transitional, by the way. 00:00:43 it's more fun if the inner document is inside the input 00:03:25 hmm… if you're going to nest HTML like that there should logically be a </html> to show where the inner document ends 00:03:40 also, if this is /X/HTML, the browser should refuse to display it 00:05:26 -!- tromp_ has quit (Remote host closed the connection). 00:06:38 -!- zzo38 has joined. 00:24:55 ais523: you know anything about VSDGs? 00:27:03 I don't know what the acronym stands for, which leads me to suspect not 00:27:41 Wikipedia doesn't know either 00:27:46 Value State Dependence Graph. https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-705.pdf 00:27:53 Fun little paper. 00:27:59 good concept 00:33:42 My opinion of PostScript is that it isn't a very good protocol or document format, but it is an OK programming language. 00:39:02 ais523: was asking because I thought it'd be something you'd be interested in. (I'm considering trying something with it myself) 00:41:04 moony: I internally think of programming at a very high level, so things like the difference between VSDG and SSA-PDG are hard for me to notice 00:41:30 I was considering using a subset of SMT2 (the input language to Z3 and friends) as a compiler IR, on the basis that it'd make proving the correctness of optimisations easy 00:41:54 neat 00:42:21 and that's VSDGish in the sense that although the source language can express unnecessary ordering constraints, the SMT solver can't see them once the program has been loaded 00:42:35 so I didn't really notice they existed 00:43:31 (another example of this sort of mental block: after working with concurrent programming for several years, I generally forget that in most processors and languages, the order in which you assign to two variables can be relevant even if you don't explicitly order them) 00:43:41 -!- tromp has joined. 00:44:28 ooh right: the compiler I wrote for work uses a VSDG-based internal representation, I only just realised 00:45:20 only its target platform is hardware, which is basically VSDG in its own right, the only difference is that you need to give explicit timing rules for when the wavefront moves 00:45:44 (but you're trying to parallelise as much as possible and not sequentialise unless you really have to, otherwise you might as well just use a CPU) 00:46:57 the other difference is that we had control flow combinators that could introduce loops in the graph 00:48:39 -!- tromp has quit (Ping timeout: 264 seconds). 01:03:11 -!- tromp has joined. 01:05:12 mm 01:06:25 neat 01:07:47 -!- tromp has quit (Ping timeout: 252 seconds). 01:16:29 -!- iovoid has joined. 01:20:50 -!- Sgeo_ has joined. 01:23:55 the great thing about the hardware compiler is that the sort of optimisation that would normally be considered a "peephole optimisation" is, ironically, higher-level rather than lower-level than working closer to the source code 01:24:01 -!- Sgeo has quit (Ping timeout: 268 seconds).
01:24:27 because it's a finer-grained representation than typical source is, and there are no ordering constraints unless they're needed for correctness 01:25:26 ugh, this VSDG thesis uses «printf("%i",(x++)+(x++));» as an example of code with unspecified behaviour, but the behaviour is actually undefined 01:25:36 in particular there's no guarantee that it prints either 0 or 1, like the author expected 01:26:47 Yeah, you're mutating the same l-value twice with no sequence point. 01:26:50 «printf("%i %i", x++, x++);» would have been a better example (I believe this is required to print either "0 1" or "1 0" but no requirement on which) 01:27:01 Yep. 01:27:02 `! c printf("%i %i", x++, x++); 01:27:04 Does not compile. 01:27:14 I think it might be implementation-defined? 01:27:14 did we fix that thing yet? 01:27:38 Though actually, could just be unspecified. 01:27:54 `` echo 'int main(void) { printf("%i %i", x++, x++); }' | gcc -Wall -x c -o /tmp/a.out /dev/stdin; /tmp/a.out 01:27:55 /dev/stdin: In function ‘main’: \ /dev/stdin:1:18: warning: implicit declaration of function ‘printf’ [-Wimplicit-function-declaration] \ /dev/stdin:1:18: warning: incompatible implicit declaration of built-in function ‘printf’ \ /dev/stdin:1:18: note: include ‘ `` echo 'int main(void) { printf("%i %i", x++, x++); }' | gcc -x c -o /tmp/a.out /dev/stdin; /tmp/a.out 01:28:11 /dev/stdin: In function ‘main’: \ /dev/stdin:1:18: warning: implicit declaration of function ‘printf’ [-Wimplicit-function-declaration] \ /dev/stdin:1:18: warning: incompatible implicit declaration of built-in function ‘printf’ \ /dev/stdin:1:18: note: include ‘ ais523: I wrote http://slbkbs.org/tmp/2019-08-21-test-cases.txt ; I don't remember whether we talked about all the things in that document. 01:28:37 `` echo -e '#include <stdio.h>\nint main(void) { printf("%i %i", x++, x++); }' | gcc -x c -Wall -o /tmp/a.out /dev/stdin; /tmp/a.out 01:28:38 /dev/stdin: In function ‘main’: \ /dev/stdin:2:34: error: ‘x’ undeclared (first use in this function) \ /dev/stdin:2:34: note: each undeclared identifier is reported only once for each function it appears in \ /hackenv/bin/`: line 5: /tmp/a.out: No such file or directory 01:28:39 ``echo 'int printf(); int main(void) { printf("%i %i", x++, x++); }' | gcc -x c -o /tmp/a.out /dev/stdin; /tmp/a.out 01:28:40 /srv/hackeso-code/multibot_cmds/lib/limits: line 5: exec: `echo: not found 01:28:51 `` echo -e '#include <stdio.h>\nint main(void) { int x = 0; printf("%i %i", x++, x++); }' | gcc -x c -Wall -o /tmp/a.out /dev/stdin; /tmp/a.out 01:28:52 /dev/stdin: In function ‘main’: \ /dev/stdin:2:51: warning: operation on ‘x’ may be undefined [-Wsequence-point] \ 1 0 01:29:01 ooh, gcc things it is UB 01:29:06 *thinks 01:29:21 I'm not sure it's right, though 01:29:32 Huh, is the ',' in an argument list not a sequence point? 01:29:36 I thought it was 01:29:36 I would've assumed it was.
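A minimal sketch of the distinction being poked at here, assuming nothing beyond standard C: the two x++ side effects inside a single argument list are unsequenced relative to each other, while splitting them into separate statements puts sequence points between them and pins the output down.

    #include <stdio.h>

    int main(void)
    {
        int x = 0;

        /* Unsequenced: the two x++ side effects in one argument list may
           interleave freely, so this call is not well defined.
        printf("%i %i", x++, x++);
        */

        /* Sequenced: each full expression ends with a sequence point,
           so the increments cannot interfere. */
        int a = x++;
        int b = x++;
        printf("%i %i\n", a, b);   /* always "0 1" */
        return 0;
    }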
01:30:45 apparently the function call is a sequence point, but there isn't a sequence point between the arguments 01:31:05 `` echo -e '#include <stdio.h>\nint main(void) { int x = 0; printf("%i %i", x++, x++); }' | gcc -x c -Wall -fno-diagnostics-color -o /tmp/a.out /dev/stdin; /tmp/a.out 01:31:07 /dev/stdin: In function ‘main’: \ /dev/stdin:2:51: warning: operation on ‘x’ may be undefined [-Wsequence-point] \ 1 0 01:31:22 `` echo -e '#include <stdio.h>\nint main(void) { int x = 0; printf("%i", x++) + printf("%i", x++); }' | gcc -x c -Wall -fno-diagnostics-color -o /tmp/a.out /dev/stdin; /tmp/a.out 01:31:23 /dev/stdin: In function ‘main’: \ /dev/stdin:2:63: warning: operation on ‘x’ may be undefined [-Wsequence-point] \ /dev/stdin:2:47: warning: value computed is not used [-Wunused-value] \ 01 01:33:56 Aaah 01:36:33 I'm reading the standard to try to determine whether that's UB or not 01:37:00 each x++ has to run either before or after the other printf, but there's no such requirement on running before or after the /argument calculation to/ the other printf 01:37:19 (n1570.pdf 6.5.2.2.10) 01:38:46 that said, I can't find the part of the standard that makes "modify twice between sequence points" UB 01:38:54 I know it exists, or used to, I just can't find it 01:42:18 6.5p2 "If a side effect on a scalar object is unsequenced relative to either a different side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined." 01:42:22 wow, it is indeed UB 01:42:40 the language has been tightened up a lot since the earlier C standards 01:42:55 `` echo -e '#include <stdio.h>\nint main(void) { int x = 0; printf("%i", x++) + printf("%i", x++); return 0; }' | gcc --std=c89 -x c -Wall -fno-diagnostics-color -o /tmp/a.out /dev/stdin; /tmp/a.out 01:42:56 /dev/stdin: In function ‘main’: \ /dev/stdin:2:63: warning: operation on ‘x’ may be undefined [-Wsequence-point] \ /dev/stdin:2:47: warning: value computed is not used [-Wunused-value] \ 01 01:43:45 for contrast, the language from C99: "Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression." 01:44:40 and 6.5.2.2p10: "The order of evaluation of the function designator, the actual arguments, and subexpressions within the actual arguments is unspecified, but there is a sequence point before the actual call." 01:45:34 "previous and next" is pretty unclear in this case! 01:58:34 -!- tromp has joined. 01:59:57 Is having a pregenerated memory layout that contains pointers a good reason to disable ASLR? 02:01:03 I think it depends on how important that pregenerated memory layout is, and what sort of attack surface your program has 02:03:34 -!- tromp has quit (Ping timeout: 276 seconds). 02:03:35 Go disables ASLR on their binaries (by default?) because they say the language is memory-safe so it's irrelevant and harder to debug. 02:04:27 doesn't that make it more likely to be vulnerable to meltdown/spectre-type exploits? 02:05:15 That seems possible? 02:05:23 even if you assume the memory safety is perfect, and so is the memory safety of all the language's dependencies (if any; IIRC go doesn't use libc, so possibly it doesn't use anything else either) 02:05:31 I hear it's often possible to use tricks to get information about memory layout so ASLR isn't that great anyway. 02:05:47 well, it depends on how fine-grained it is, I guess 02:06:18 Programs with long loading times would be the candidates for this.
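A rough sketch of the "one random base address per execution, deterministic offsets after that" idea that comes up in the next message; this is not animalloc's actual code, and the arena size, alignment, and use of getrandom() are assumptions.

    #include <stdint.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/random.h>

    static unsigned char *heap_base;                  /* randomised once per run */
    static size_t heap_used;
    static const size_t HEAP_SIZE = (size_t)1 << 30;  /* 1 GiB arena (assumption) */

    /* Meant to be called once at startup: pick a random base, then every
       later allocation is at a deterministic offset from it. */
    static int heap_init(void)
    {
        uint64_t r;
        if (getrandom(&r, sizeof r, 0) != sizeof r)
            return -1;
        /* Keep the hint page-aligned and in a plausible user-space range. */
        void *hint = (void *)(uintptr_t)((r & 0x7fffffff000ULL) + 0x100000000ULL);
        void *p = mmap(hint, HEAP_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED)
            return -1;
        heap_base = p;
        return 0;
    }

    /* Deterministic bump allocation relative to the random base. */
    static void *toy_malloc(size_t n)
    {
        n = (n + 15) & ~(size_t)15;   /* 16-byte alignment */
        if (heap_used + n > HEAP_SIZE)
            return NULL;
        void *p = heap_base + heap_used;
        heap_used += n;
        return p;
    }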
02:06:35 animalloc (the experimental malloc impl I wrote a while back) makes ASLR fairly useless because it has one random base address per program execution and everything malloc'ed is in a deterministic place relative to that 02:06:38 I hear emacs used to support loading from a core file instead of from scratch, to make load times faster. 02:07:06 but it's still better than being off entirely 02:07:06 Doesn't it still have unexec? 02:07:17 That's what it's called. 02:07:20 https://github.com/typester/emacs/blob/master/src/unexec.c Ah-yup 02:07:25 OTOH, I can imagine a malloc implementation that randomizes the address on every call 02:07:27 Oh, wait, that's ancient bull 02:07:49 a.out format executable 02:07:57 do those even work on modern Linux? I guess they do 02:08:12 Not by default, I think. 02:08:15 Your distro might not compile in support for it. 02:08:21 Computer games are a clear candidate for this sort of thing (maybe particularly on consoles?). But I don't know what occupies most of their load time. 02:08:32 Still, Linux _itself_ has been fantastic at maintaining ABI support. 02:08:44 It's userspace that's been pretty bad at it. 02:08:46 It could be loading assets, which have no pointers and therefore can just load things without pointers into memory. 02:08:55 shachaf: in-game load times, the main bottleneck is sending data to the GPU, I think 02:09:06 pikhq: If only the kernel ABI was all you needed! 02:09:09 it's pretty much being sent a memory image /already/, with no real computations 02:09:15 I mean game startup time. 02:09:35 most games are pretty good about the CPU-related parts of startup time, IME 02:09:45 probably not all of them though 02:09:50 shachaf: I mean, you _could_ static link everything. 02:09:58 Can you? 02:10:06 if there's a long loading screen right at the start, it's probably because it needs some large assets in the GPU to display the title screen 02:10:09 I want to make graphical programs which means I want to use something like OpenGL. 02:10:13 Though if you want to use GL, you are in for a _rough_ time. 02:10:24 By rough you just mean impossible, right? 02:10:29 either that or because it's prefaulting the files it uses from disk into memory 02:10:38 The only official ABI for portable hardware-accelerated graphics is dynamic linking. 02:10:41 shachaf: Yeah, basically. 02:11:18 wouldn't a statically linked graphical application potentially be limited to one type of GPU anyway? 02:11:36 there's no obvious reason why a dynamically-linked-in graphics library would be portable 02:12:08 The fact that the entire GPU interface lives in userspace is fucking awful, but there we are. 02:12:20 OpenGL is a mostly portable API for multiple GPUs. 02:12:32 pikhq: I think it's better than, say, drawing scrollbars in kernel code 02:12:39 (okay, not the _entire_, but a sizable chunk of it is) 02:12:47 ais523: I mean yes, but that's the other extreme, ain't it? 02:12:55 shachaf: yes, but couldn't two different libraries implement that API, each specialised for a particular card? 02:13:03 "bad thing X isn't as bad as other random bad thing Y" isn't really an argument for thing X. 02:13:09 pikhq: yes, but IIRC Windows actually does that, or at least used to 02:13:27 so presumably we have to find a tradeoff point in between 02:13:53 The reasonable approach is the kernel provides an abstract interface for the hardware. 02:13:54 ais523: Sure? Those are the different dynamic implementations of libGL.so. 02:14:07 Like it does for every other device. 
02:14:21 Nobody has to dynamically link in a hard drive driver. 02:14:48 The hard drive doesn't run software written in a secret instruction set that you only get a compiler into. 02:15:22 right, you don't even get a compiler for its secret instruction set 02:15:41 but that's typically mostly irrelevant because the hard drive implements a standardised API so you don't need to mess around with the CPU on there 02:15:53 shachaf: Which is itself just absurd IMO. 02:15:55 (some people have got interesting code running on hard drives, though) 02:15:56 But that's how it evolved. 02:16:26 pikhq: I can see potential worries about forwards compatibility, but the real reason is likely to be different 02:16:33 And in Unix, it's not like the GPU devs showed up and decided to do it like this. Oh no. Even ancient X had the driver in userspace. 02:16:58 like, if GPU machine code were public, people might write programs in it directly and then the company would have trouble selling new graphics cards 02:17:04 pikhq: How do you mean? 02:17:10 CPU microcode is probably secret at least partly for this reason 02:17:44 shachaf: I mean that prior to DRM (the Linux API), X11 mmapped /dev/mem and had access to IO ports. 02:18:07 Which meant that X11 was portable to basically every kernel! 02:18:22 pikhq: the latter makes a lot of sense, imagine doing a system call every time you wanted to send a byte to the GPU 02:18:25 ... because it was effectively as privileged as the kernel 02:18:28 The X11 server, but clients had a standard protocol to send drawing commands to. 02:18:36 True. 02:18:44 So from the perspective of someone writing a program it was just part of the platform API. That seems OK. 02:18:56 (Except the protocol presumably didn't allow for very efficient graphics.) 02:19:05 a different split would probably involve some sort of X11 renderer that ran in the kernel 02:19:18 Or a kernel generic framebuffer driver. 02:19:25 Similar to fbdev, with more features. 02:20:09 Though the reason _that_ didn't take off is because back in the day, 2D hardware had a lot of random acceleration features, and it would be very hard to produce a good API that exposed them usefully. 02:20:10 but modern GPU libraries don't want a framebuffer 02:20:15 There are a few other standard ABIs that are only available via dynamic linking, like DNS and user lookup. 02:20:25 But for the most part you can reimplement those yourself with a bit of work. 02:20:33 a framebuffer that implies you're doing all your rendering in software 02:20:35 shachaf: musl supports user lookup without it dynamic linking. :) 02:20:39 s/that // 02:21:03 pikhq: what about when PAM is in use? 02:21:13 musl speaks the glibc nscache protocol. 02:21:23 ais523: Oh, yes, for PAM you're still stuck. 02:23:06 Man. 02:23:12 Time to scrap all of Linux userspace. 02:24:19 And unfortunately, I think PAM wouldn't let you even in _theory_ do a static linked implementation that talks to a daemon that handles the dynamic linking for you. 02:24:24 Given that GPU manufacturers aren't cooperative, what's the best graphics API an operating system could provide? 02:24:51 it'd have to be very extensible, I think 02:25:05 GPUs keep inventing new features that aren't in any API that existed at the time 02:26:18 And designing brand new APIs to support them. 02:26:45 Not that I can blame them; GL isn't a great fit to GPUs anymore. But even so. 02:27:42 So it still sounds like my answer is that I have to do dynamic linking. 
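One middle ground for the "forced into dynamic linking" situation above is to keep the main binary as static as possible and pull in only the GL entry points at runtime with dlopen()/dlsym(). A minimal sketch; the soname "libGL.so.1" is the conventional one on Linux, but treat it, and the lack of a GL context, as assumptions here (link with -ldl on older glibc).

    #include <dlfcn.h>
    #include <stdio.h>

    /* Declared by hand so the sketch doesn't need the GL headers. */
    typedef const unsigned char *(*glGetString_fn)(unsigned int);
    #define GL_VERSION 0x1F02

    int main(void)
    {
        /* Load whatever GL implementation the system provides. */
        void *gl = dlopen("libGL.so.1", RTLD_NOW | RTLD_GLOBAL);
        if (!gl) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        glGetString_fn glGetString = (glGetString_fn)dlsym(gl, "glGetString");
        if (!glGetString) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            return 1;
        }

        /* Without a current GL context this typically returns NULL; a real
           program would create a context (GLX/EGL) first. */
        const unsigned char *ver = glGetString(GL_VERSION);
        printf("GL_VERSION: %s\n", ver ? (const char *)ver : "(no context)");

        dlclose(gl);
        return 0;
    }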
02:27:47 I guess one potentially useful extreme would be something like GLSL but a bit more powerful, used to express /everything/ 02:28:09 although that'd be putting a lot of trust in the compiler to recognise what you were trying to do and optimise it 02:28:53 shachaf: What you could maybe do is limit what the dynamic library is allowed to do. 02:29:17 Like, if you forced that .so to have _no_ dependencies? 02:29:48 But libGL depends on libc and whatever else. 02:29:59 Yes, and that's why it's a problem for ABI compat. 02:30:05 pikhq: what problem is that trying to solve? (fwiw, I'm not clear on which problem shachaf is trying to solve with static linking; there's more than one problem it could potentially solve) 02:30:21 ais523: Long-term ABI stability. 02:30:41 Oh, maybe you're answering the "what could a platform do in general" question, rather than the "what can I do right now" question. 02:30:42 Force libGL to have a fixed ABI and no interaction with things that could possibly change. 02:30:52 Oh, yes, that's what I'm answering. 02:31:36 shachaf: as for "what can I do right now", depending on what sort of performance you need you might be able to invent your own very simple graphics protocol + a renderer for it 02:31:37 Sure, minimal dependencies would be better, of course. 02:31:39 shachaf: For now? You're kinda in a rough place; you're more or less forced into relying on Linux's moderately unstable userspace ABI. 02:31:50 e.g. you could just place rendered bitmaps in a particular location in shared memory 02:32:02 if you don't care about compatibility with X, you could even use the Linux framebuffer! 02:32:11 If possible, yes, define your own simple graphics protocol and a renderer if you want to ship a static binary that's long-term usable. 02:32:19 ais523: I use a 3840x2160 resolution so software renderers get slow pretty quickly. 02:32:36 In fact, that's probably your single best bet if you want to ship a Linux binary that'll still be useful in 20 years. 02:32:43 I just want software that's simple and reliable and straightforward to run. 02:33:51 Build software against symbols that are defined in the LSB, and static link in all other dependencies, I guess. 02:34:10 Not a great answer, but that's at least an ABI that's likely to be supported for some time. 02:34:20 (also really limiting) 02:34:31 shachaf: fwiw, my experience with programs that try to provide bundled dependencies has been that they usually end up breaking and can be fixed by forcing my OS's packaged versions of those dependencies rather than the program's bundled versions 02:34:50 and of course there's the issue of security updates too 02:35:33 What sort of dependencies are you talking about? 02:35:40 Problem is, if you don't bundle dependencies you're at serious risk of ABI breakage. 02:35:47 shachaf: the most recent time this happened it was zlib 02:36:06 Because the Linux userspace is only somewhat concerned with ABI stability. 02:36:21 How do you "force your OS's versions" of a statically linked library? 02:36:46 You don't, unless they're being aggressively LGPL-compliant. :) 02:36:59 shachaf: LD_PRELOAD works, but normally the shipped dependencies have been linked in dynamically anyway which makes it even easier 02:37:04 Typically on Linux, if you're bundling dependencies you're bundling .so files. 02:37:11 also the last time this happened I had the program's source available so I just modified it 02:37:21 pikhq: LGPL is bad for many reasons, but one of those reasons is encouraging dynamic linking. 
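The "place rendered bitmaps in a particular location in shared memory" idea a bit further up, sketched with POSIX shared memory; the object name, resolution, and pixel format are made up, and a real protocol would need some synchronisation (a semaphore or a sequence counter) on top.

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <stdint.h>

    #define FRAME_NAME   "/myapp-framebuffer"   /* hypothetical object name */
    #define FRAME_WIDTH  3840
    #define FRAME_HEIGHT 2160

    int main(void)
    {
        size_t size = (size_t)FRAME_WIDTH * FRAME_HEIGHT * 4;   /* RGBA8888 */

        int fd = shm_open(FRAME_NAME, O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, size) < 0)
            return 1;

        uint32_t *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
        if (pixels == MAP_FAILED)
            return 1;

        /* Producer side: fill the frame; a separate renderer process maps
           the same name and blits it however it likes (X11, fbdev, ...). */
        for (size_t i = 0; i < (size_t)FRAME_WIDTH * FRAME_HEIGHT; i++)
            pixels[i] = 0xff202020;   /* opaque dark grey */

        return 0;
    }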
02:37:48 (there was some insanity in the build process; I remember that reimplementing sqrt() myself rather than using the library version turned out to be the easiest solution) 02:38:02 _Really_, if you're targetting Linux, the answer for having code that's easy to run that's still useful in years to come is to ship binaries that work on currently common distros, and offer source that's reasonably portable so it can still be built 10 years later. 02:38:24 I'm presumably targeting every common platform. 02:38:45 Well, on Windows this is an easy question to answer. 02:38:52 Ship a binary. It'll outlive you. 02:39:09 pikhq: I'm not convinced, I've had plenty of experience with Windows binaries breaking on later versions of Windows 02:39:25 I've had programs written against Windows 3.1 break on Windows 98, for example 02:39:33 Okay, granted I'm overselling it. 02:39:39 eventually I got exasperated enough to look for any alternative, and ended up moving to SunOS and later Linux 02:39:43 Really, it will _probably_ work, but there's cases where it won't. 02:39:52 I remember Windows XP could still run binaries from Windows 2. 02:40:04 Past a certain point Windows' ABI compat is best-effort. 02:40:13 shachaf: 32-bit Win10 probably still can. 02:40:31 then my rewritten version for Windows 98 "broke" on Windows XP (it was still runnable but it ran so much more slowly as to be unusable) 02:43:09 …maybe something like WebAssembly is a good way to produce a long-lived executable? 02:43:33 being designed to be portable over anything else is also more likely to leave it stable to old programs, and ported to new OSes 02:43:43 but it basically has no kernel API to speak of 02:44:04 People are now talking about using WebAssembly to deploy code to their own servers and it seems horrible to me. 02:45:33 I actually think WebAssembly is going to be a pretty important technology, maybe not for everything, but for a wide range of applications 02:47:34 shachaf: People be using waaay worse stuff already. 02:48:44 I can imagine people doing something that's k8s-like, except with WebAssembly instead of containers, and with the scheduler actually migrating process states rather than just killing things. 02:49:09 Perhaps with it effectively running as a kernel, rather than having an underlying OS that mostly is irrelevant. 02:49:43 Yes, people are doing bad things, but that's not an excuse for other bad things. 02:49:54 Docker-style containers are scow. 02:52:29 Conceptually, I don't think so. 02:52:36 Basically every implementation detail is, though. 02:52:37 fwiw, I just looked through my old executables and found one written in 2004; it works just fine in Wine 02:55:56 http://nethack4.org/pastebin/BACKGAM.EXE if anyone's interested in trying it out on Windows (the delay was me running it through local ClamAV, followed by VirusTotal, to make sure it wasn't infected) 02:56:07 presumably they still have viruses from 2004 in their definitions :-) 02:56:47 Win10 says "This app can't run on your PC." 02:57:04 Disappointing, but at least that's a very clear error message. 02:58:06 $ file BACKGAM.EXE 02:58:07 BACKGAM.EXE: MS-DOS executable, NE for MS Windows 3.x 02:58:48 I'm on 64-bit Windows, so there's that. 02:59:09 whoa, not even PE? 02:59:10 I'm not sure what file's output even means 02:59:20 is that a specific sort of PE? or something older 02:59:35 the first two bytes are MZ so it's probably PE 02:59:42 That's an older binary file format. 02:59:45 "New Executable". 
03:00:28 It was the successor to the MZ file format, used by Win16 and OS/2. 03:01:15 The MZ bytes are because NE _also_ supports a DOS MZ stub. 03:02:05 -!- ais523 has quit (Quit: sorry for my connection). 03:02:21 -!- ais523 has joined. 03:02:46 presumably it's a 16-bit executable, then 03:02:46 Are there any technical advantages to ELF or PE? 03:03:12 there are big technical advantages to arbitrary section support, which I'm not sure ELF's predecessors had 03:03:36 I am pretty sure PE does not support that. 03:03:38 do you mean ELF vs. PE, as opposed to ELF and PE compared to their predecessors? 03:03:43 I meant ELF vs. PE. 03:04:03 PE is a Windows variant of COFF. 03:04:13 COFF is the binary format ELF was built to improve upon. 03:05:08 Among other things, PE isn't really built to handle relocations in the way that ELF does... 03:05:20 How do you mean? 03:05:21 You can do relocations, yes, but they're all textrels. 03:05:33 PE relocations are weird 03:05:51 Does that mean modifying the text section? 03:05:56 On x86, each DLL is built for a given base address, and if it has to be located somewhere else you get a process-specific copy in memory. 03:06:13 before ASLR the default was for each individual executable and library to choose a preferred base address and relocations would only happen if they overlapped 03:06:23 but if they did there would be a need to relocate everything 03:06:28 On x86_64, the code's just intrinsically PIC, so this doesn't happen 03:06:37 like, every address in the code needs changing 03:08:20 hmm, after reading up a bit on PE, I have decided it is insane 03:08:58 ELF has been adopted by most operating systems, not just Linux (although Mac OS X uses its own thing) 03:09:28 ELF, while somewhat more complex, isn't a bad format, and it's suitably general that it is likely to just work for your use cases. 03:09:43 PE does support arbitrary sectiions, though 03:09:56 PE supports specifying your desired stack size, which ELF doesn't seem to. 03:10:06 But it seems there may be a GNU extension (which Linux ignores so it's irrelevant). 03:10:18 https://en.wikipedia.org/wiki/Comparison_of_executable_file_formats seems useful 03:10:39 Oh, huh. 03:11:40 Oh, I should've thought to look there. 03:11:48 PE files can contain an icon! That's a big advantage. 03:13:07 I don't see a reason an executable program should be more than one file by default. 03:13:16 Including .so files with your program just seems ridiculous. 03:13:54 I do like the Windows resource mechanism, although only for the purpose of providing metadata for use by other programs (typically shells and the GUI equivalent of shells) 03:14:07 Windows uses the same general mechanism for lots of purposes, some of which are insane 03:14:26 I know someone who includes information about command line arguments in an ELF section. 03:14:28 "this is my icon" seems like a reasonable use though 03:14:50 shachaf: I think there should be a machine-readable command line arguments standard 03:15:05 I agree. 03:17:08 fwiw, the icon system on Ubuntu (and presumably other distributions?) appears to allow the desktop theme to override icons 03:17:33 which is mostly used for generic icons like "text editor" that multiple programs are allowed to use 03:18:31 I think the program shouldn't need icons, and just the text is good enough mostly 03:19:43 Yeah, that's Freedesktop behavior I think. 
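The "command line arguments in an ELF section" trick mentioned above can be approximated with GCC/Clang's section attribute; the section name and the payload format here are invented for illustration.

    /* Embed machine-readable usage metadata in its own ELF section so other
       tools can read it without running the binary. */
    __attribute__((section(".cmdline_args"), used))
    static const char cmdline_spec[] =
        "{\"flags\":[{\"name\":\"--verbose\",\"type\":\"bool\"},"
        "{\"name\":\"--output\",\"type\":\"path\"}]}";

    int main(void)
    {
        return 0;
    }

Something like readelf -p .cmdline_args ./a.out then dumps the payload without executing the program.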
03:20:12 "ownership" of programs seems to be very different on different OSes 03:20:31 for example, Windows < 8 used to organize programs on the Start menu by manufacturer 03:20:49 and they're probably still organised that way in Program Files 03:21:18 whereas the FHS suggests chopping them up into pieces, storing each file in a directory appropriate to its purpose 03:21:49 The FHS is also assuming most of the FS hierarchy is owned by the OS. 03:22:16 If it's a completely third-party program it's supposed to be in /opt/foo, where it's owned by the program, or in /usr/local, where it's owned by the system administrator. 03:23:28 ooh, that's a good way to express the difference betwen /opt and /usr/local 03:24:06 this implies that packages should aim for /usr if being uploaded to the OS package manager, and /opt if being sideloaded, right? 03:24:21 (with /usr/local being for things that were built manually) 03:25:55 I think so, yeah. 03:26:22 <\oren\> http://orenwatson.be/v0tgil.htm#vocabpics 03:27:02 I'm confused. Is that supposed to be a phonetic spelling? 03:27:39 Oh, this isn't an English thing, never mind. 03:27:45 <\oren\> it's vötgil, a IAL that's so bad it's hilarious 03:28:15 <\oren\> and therefore, I have started learning and using it and making resources about it 03:28:31 whoa, https://en.wikipedia.org/wiki/ALGOL_58 03:28:40 I didn't know there was such a thing as ALGOL 58. 03:29:10 Have you considered the following novel thesis: 03:29:15 Good things are better than bad things. 03:29:20 <\oren\> When was the first actual compiler for ALGOL made 03:29:31 \oren\: This is... not a good language. 03:30:00 "Every word in Vötgil has exactly three letters" 03:30:06 <\oren\> wait till u see the grammar 03:30:09 Ah, I _see_, you're doing Toki Pona only less well 03:30:49 <\oren\> nah the grammar has infinite recursion,unlike toki pona 03:31:14 Wow, that grammar is actually pretty shit 03:32:16 <\oren\> also it differs from toki pona in having lots of words for manmade objects 03:32:32 And in not having much of a point, good or bad 03:32:49 (I mean, Toki Pona is silly, but at least it has a reason for being the way it is) 03:33:00 and Minecraft support? or does Toki Pona have that too? 03:33:15 Like, it looks like it's coming from someone who only knows English, but thinks they're gonna be clever by being overly simplistic about it. 03:33:24 <\oren\> I think there is a mincraft localizationfor toki pona 03:33:57 Hah, caaaalled it. The vocab is all supposed to be English-cognate. 03:34:09 <\oren\> well, the funny thing is, it looks like english up until you interpret eisenmann literally 03:34:35 <\oren\> he says, e.g. a descriptor always goes before the word it describes 03:34:45 <\oren\> so you have to say the bird red is 03:36:08 <\oren\> and a preposition goies between the words it modifies 03:36:37 <\oren\> which means a preposition has to attach to the verb, *before* the direct object 03:37:06 lol 03:37:49 So, either the grammar works nothing like he intends, or he is just bad at describing the grammar in a clear way. 03:37:56 <\oren\> I make with wood a house 03:38:03 wow that grammar is bad 03:38:42 <\oren\> the grammar is based on me interpreting eisenmann literally and paying no attention to his example sentences 03:38:59 That is at least the more interesting version. 
03:39:07 Because his examples seem to suggest "lol it's English" 03:39:13 <\oren\> because if I went by his examples it would just be english 03:39:38 <\oren\> he's since moved on to breadspeak 03:39:52 <\oren\> (yes, that's the name of his current conlang) 03:40:19 something seems odd about a conlang whose vocabulary is so English-inspired 03:41:20 <\oren\> I mean,volapük was also full of clipped english words 03:41:43 <\oren\> with some german ones like klig (war) 03:41:50 Esperanto is a kind of consensus-Indo-European, which makes more sense 03:43:41 Though consensus-Indo-European as viewed by a Russian speaker, which is a bit weird phonotactically. 03:44:22 <\oren\> Another interesting thing is vötgil has morpemic roots for such meanings as "foot" "pound" "dollar" "mile" and "gallon" 03:45:31 * ais523 is wondering if it would be possible for a conlang to actually be a con, as in confidence trick 03:46:02 What, like Interlingua? 03:47:11 -!- tromp has joined. 03:51:19 -!- tromp has quit (Ping timeout: 250 seconds). 04:59:03 -!- ais523 has quit (Quit: quit). 05:26:05 Does your text section have to be called .text? 05:32:08 As far as i'm aware, i don't think so? Why not test 05:33:34 -!- oerjan has joined. 05:35:03 -!- tromp has joined. 05:36:19 Well, it's certainly not true for executing ELF files, since sections are ignored entirely. 05:36:30 I was wondering whether there's some situation where some linkers expect it. 05:40:01 -!- tromp has quit (Ping timeout: 276 seconds). 05:53:47 it's interesting that it miscounted by 5 initially? <-- that's because you didn't use `1 initially, so there was no 1/3: in it 05:56:25 iow, `1 and `2 share the same chopping into lines, `2 just starts displaying the second one. 05:58:26 `cat bin/2 05:58:26 ​\` "$@" |& sport 2 06:12:46 Hmm. 06:12:57 Shouldn't `2 handle that case? 06:34:00 -!- tromp has joined. 06:41:50 if it did, then it would break if you cycle back to line 1 because now it's longer than the cutoff 06:42:21 It could break that line into two lines and then show line 3. 06:42:30 That's certainly not confusil. 06:42:36 CERTAINLY 07:13:10 -!- tromp has quit (Remote host closed the connection). 07:28:59 -!- tromp has joined. 08:31:00 -!- Lord_of_Life has quit (Ping timeout: 248 seconds). 08:32:48 -!- Lord_of_Life has joined. 08:43:14 -!- AnotherTest has joined. 08:48:07 -!- tromp has quit (Remote host closed the connection). 08:56:23 -!- tromp has joined. 09:29:41 -!- oerjan has quit (Quit: Later). 09:51:20 -!- wob_jonas has joined. 09:51:52 -!- tromp has quit (Remote host closed the connection). 10:01:28 -!- tromp has joined. 10:01:30 ais523: right, and that UB rule of C is why we can't write the statement of 3SP as just "a[a[a[x++" in C, 10:01:46 as just "a[a[a[x]]]++" in C, 10:01:57 and this impacts some other simple one-instruction languages too. 10:04:25 ais523: no, a.out is no longer supported by the linux kernel. this is recent enough that you may be running an old enough kernel in which you can compile an a.out executable, but the harder part may be to install a libc that works with it. 10:04:41 I don't think I've ever seen an a.out executable alive, I only read about them in history books. 10:07:01 video game loading times => I think if the loading time matters in first place, then the game is loading something wrong, eg. 
having long repeated unskippable cutscenes such that speedrunners reset the game rather than wait for the cutscene, or frequent crashes or freezes or hardlocks that you can recover only by restarting the game 10:09:33 pikhq: the hard drive driver is in the kernel because the kernel needs it to boot, and the file system driver is in the kernel because people demand swapping into files so we can't swap the file system driver out "https://esolangs.org/logs/2019-08.html#lqVb", plus also to limit permissions of file access, so that a process can't just read and write 10:09:33 every file it wants to. 10:10:10 for a GPU, I'd prefer if most of the driver was in userspace, with only as much in kernel space as required so that the program can't do something malicious, and that if the program crashes, the kernel can reset the gpu to a usable state 10:10:55 that still means some nontrivial driver in the kernel, but I would prefer if all other things were in userspace libraries installed by my distribution. 10:12:21 and note that even for the file system, mkfs and fsck and tune2fs and fdisk are still in userspace 10:13:26 writing the two file systems plus two fs extensions that you burn onto CD or DVD, plus all the knowledge about writing audio CDs, those are also all in userspace 10:14:07 the kernel knows how to read file systems from a CD or DVD because we want to boot rescue systems from them and possibly even run live systems from them 10:17:13 " There are a few other standard ABIs that are only available via dynamic linking, like DNS and user lookup." => yes. and do you know which part of glibc had a critical buffer overflow bug? the one that parses numerical ip addresses. so if you had DNS lookup statically linked into executables, then all my old executables would still be vul 10:17:14 nerable even after I have debian update glibc with the security patch. that, or I'd have to have debian update every fricking executable that has DNS lookup inside it, which is a lot these days. 10:17:43 there's a good reason that those nontrivial parts of glibc, which don't need to be inlined or anything, are in a dynamic library. they should be. 10:19:02 and debian is right to put only one copy of every library on my machine, as a dynamic library, as much as possible, not just to conserve disk space, but to be able to automatically update those dynamic libraries with ABI-compatible updates, and they'll work immediately after exec, even for programs that I've compiled myself. 10:20:16 and I've seen DNS lookup reimplemented, yes, and in that particular case it can even make sense because libc doesn't provide a non-blocking interface, but there are broken reimplementations that don't interpret your config files the way they should, in which case you get subtly broken and hard to debug problems in any non-default configuration 10:22:03 also, I would like to keep the kernel small so that I don't have to pay for features I don't use in locked kernel memory, because not every addition can be put into a swappable module, and people keep breaking the swappability 10:22:35 we used to have kernels that were just one megabyte long, I'd like that back 10:22:58 (one megabyte long with full reiserfs support) 10:23:47 glibc is a mess and every time I have to read the code for anything in it I'm sad. 10:23:52 " and of course there's the issue of security updates too" => exactly 10:24:07 Still not as sad as when I have to read anything in /usr/include/c++/, of course.
10:24:12 shachaf: yes, it is, I'm not saying you should put everything in glibc, more like that you should put everything in good userspace libraries 10:24:33 shachaf: why? do you get underscore sickness? 10:24:41 Underscores are only a small part of it. 10:24:55 If there were good userspace libraries I'd be more sympathetic to that advice. 10:25:06 don't worry, it will get worse when the macros will have to work around that ((void)a,b) can now call an overloaded operator if b is a fancy type 10:26:06 Why are you saying not to worry because it'll get worse? 10:26:12 I'd prefer for it to get better. 10:27:57 " …maybe something like WebAssembly ... being designed to be portable over anything else" => didn't they promise that about Java too? each of the past versions of Java that is? 10:29:12 WebAssembly is surely a better prospect than the JVM because its memory model is an array of bytes instead of a heap of garbage-collected Java objects. 10:30:57 " Including .so files with your program just seems ridiculous" => that's because more than one program can use the same .so, and also I can update a .so without updating the main program or backwards 10:33:36 I hate Toki Pona 10:33:57 I think it's because of the LGPL or because of everything being bad and people just doing whatever. 10:36:39 -!- cpressey has joined. 10:39:47 shachaf: I don't think it's because of LGPL that libc is bad 10:42:26 People get things to work and don't care that the result is ugly because, well, they work. 10:42:54 cpressey: sure, I know, I work with computers too, I know how it goes 10:42:56 This is by no means unique to software. 10:43:24 the next project is always urgent, there's never time or motivation to clean up the previous one to work properly 10:43:59 cpressey: sure, but I don't want to know how every other industry works. I want to be able to drink beer and eat sausage, which is why I don't go to the factories where those are made. 10:44:10 Why people think they need more and more new versions of C, though -- that, I'm not sure of. 10:44:54 Aha. At least C18 was basically a bugfix update. This makes me relatively happy. 10:45:47 C2x looks like a trainwreck though. 10:45:50 cpressey: the original C was tailored for the computers that existed back then. C99 made it official that the sqrt function, which calls *one* instruction, doesn't have to update the thread-specific errno. It and C11 also add a bit more modern floating-point maths stuff like that, which I think is good. 10:46:30 They also added clear threading semantics, which didn't exist before, plus threading and mutex+condvar and atomic primitives to have yet one more standard of those. That part is still fine. 10:47:31 That they also added variable size arrays was probably a mistake. The designated initializers was probably a reality check because the linux kernel team decided that they will be using it and aren't letting it go, so it was best to just standardize it rather than leave it as a gcc extension. It's not like msvc cares about the C standard anyway. 10:47:58 I don't know anything about C18. 10:49:06 I'm only going by what I saw when glancing at Wikipedia articles. 10:49:46 Oh yeah, they also added those msvc-specific "safe" library function thingies in the standard, where if you use memcpy you get a warning, and instead you have to pass the size argument twice to be sure it doesn't run over the buffer size or something. 
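The "pass the size argument twice" interface being described is C11 Annex K (the _s functions). Annex K is optional and glibc doesn't ship it, so this sketch assumes an implementation that does (MSVC, or a libc that honours __STDC_WANT_LIB_EXT1__).

    #define __STDC_WANT_LIB_EXT1__ 1   /* ask for Annex K, where the libc has it */
    #include <string.h>
    #include <stdio.h>

    void copy_header(char *dst, size_t dstsz, const char *src, size_t n)
    {
        /* Plain memcpy: the destination's size is implicit; nothing stops n > dstsz.
        memcpy(dst, src, n);
        */

        /* Annex K form: the destination size is passed explicitly, and a
           violation reports failure instead of silently overrunning dst. */
        if (memcpy_s(dst, dstsz, src, n) != 0)
            fprintf(stderr, "copy_header: size check failed\n");
    }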
10:50:46 That's also nonsense, but again, msvc implements it by default anyway and MS will force their own coders to that standard, everyone else can still use the normal functions and suppress their warnings, so it's not like it matters what the C standard says. 10:51:26 It's really the tricky threading and pointer semantics and that sort of thing where the C standard matters, because that's what people will look up there. 10:52:19 For some extra functions or missing functions, the compilers or libraries can always say that they don't implement that, but if everyone used different pointer aliasing semantics or different threading rules, that would be a chaos. 10:53:03 What C needs most is better static analysis, to actually prevent the buffer overruns and use-after-free's and so forth. 10:53:29 cpressey: I don't think that's the task of C. that's the task of higher level languages like rust. 10:54:19 Erm well, then you could say, what the world needs is to stop using C so much. 10:54:44 cpressey: no, some people can use C just fine without making their code full of stupid bugs 10:54:51 not me, but some programmers can 10:55:08 all the linux kernel people mostly manage it too 10:55:49 -!- xkapastel has joined. 10:55:53 you don't want to force everyone to have to prove to the compiler that their code is right, because sometimes they (eg. the linux kernel guys) need to optimize something so much that they really can't write a machine-readable proof 10:56:23 most of the time, when I write code, I don't mind the bounds-checked and memory safe operations, even if they're slower 10:56:50 for the few cases when I do need to optimize the code, I can override those checks, in rust as well as anything, or write the inner loop in C or C++ 10:57:04 I'm of the opinion that if you need to optimize it *that* much, you probably should just use assembler anyway 10:57:20 but if someone mostly wants to write code where he doesn't want to write proofs, then it's fine if they want to use C 10:57:27 I don't recommend it to most people, but some people do manage it fine 10:58:12 -!- tromp has quit (Remote host closed the connection). 10:58:23 cpressey: I disagree, there is an intermediate stage where the compiler can produce the right code if I use the right (possibly nonportable and likely machine-specific) incantations 10:58:41 but sure, sometimes you want to write the code directly in assembler, as ais523 says 10:59:05 still, to even be able to interface the assembler code with your high-level code safely, you need a language with an unsafe interface, like C or similar 10:59:22 wob_jonas: No, people ship .so files with their executables because of the LGPL. 10:59:42 it would be really inconvenient to do the boundary part if all you had was mandatory memory-safe high-level languages and machine code 10:59:43 cpressey: They work but they don't work well. 11:00:02 you'd effectively need to define the rules of C, perhaps without the compiler and concrete syntax, to do it 11:00:08 to even describe the interfaces 11:00:35 and note that gcc can do inline assembly where stuff is passed to assembly code through specific registers, and even condition flags now 11:00:40 which really isn't trivial 11:01:01 (the gcc syntax is a bit awkward, but if you deal with writing your own assembly code, you can learn it too) 11:01:07 `? =ccc 11:01:09 ​=ccc? ¯\(°​_o)/¯ 11:01:42 `? =@ccc 11:01:43 ​=@ccc is a great innovation in gcc 6, kept top secret, where inline asm statements can return a value in the carry flag on x86_64. 
See https://gcc.gnu.org/gcc-6/changes.html which keeps this secret, https://gcc.gnu.org/onlinedocs/gcc-6.1.0/gcc/Extended-Asm.html , https://marc.info/?l=linux-kernel&m=143786977730804 . 11:02:05 ^ that's one of the secret incantations, the other is the syntax to pass a variable to inline asm in a specific register 11:03:44 but more specifically, some of the time I want to write loops of explicitly vectorized code, or code with unsafe memory accesses, but I don't want to deal with the register assignment or scheduling part, in which case a C or C++ compiler with machine-specific code works better for me than writing any assembler 11:04:08 I have done this in my previous job for image processing stuff 11:05:20 and if you know what you're doing and willing to take the risk of premature optimization or pessimizing your code, then sure, write assembler code if you want 11:05:30 I may also meet a case when I need to 11:09:53 Right, everyone who isn't in a position where they need to hand-vectorize code to meet their performance requirements needs to stop using C. 11:10:35 cpressey: I was using C++, and not a portable one either 11:10:57 one that basically tells which AVX instruction to use 11:11:16 and optimized for the AVX instruction set 11:13:12 hopefully soon I'll be able to write that directly in rust 11:13:33 they already have a decent backend for the compiler 11:13:49 and they added some of the x86-specific stuff 11:14:27 not as good as C++ for that sort of thing yet, but improving 11:14:41 (it just needs a fucking printf) 11:17:01 -!- tromp has joined. 11:17:22 "you don't want to force everyone to have to prove to the compiler that their code is right" -- No, actually, as the person who is going to *use* the software, I kind of *do* want that 11:18:33 cpressey: most of the software I write is throwaway code used for research that nobody else will run on their machine 11:18:48 I care about that use 11:19:16 and care about the kernel and image editor and video compressor and other programs doing all their nonsense efficiently too 11:19:22 i love that rust ships with a complainer 11:19:47 not the browser, or the tax filing software, mind you 11:19:58 myname: perl does too, it's an old idea 11:20:02 doesn't usually help much 11:20:14 well, depends on which complainer you mean 11:20:18 perl ships with only one of them 11:20:34 rustc of course ;) 11:20:38 cpressey: ah but are you willing to pay a reasonable price for that? 11:20:40 do you mean the one that gives the long explanation for error messages, or the lint-like tool? 11:21:15 mainly the type- and memory-checker 11:21:19 love it 11:21:32 myname: sorry, I mean the former is rustc --explain, the latter is clippy 11:21:45 but apparently you just mean that rustc gives decent error messages 11:21:46 cpressey: For the time being, I think proving code correct is one to two (decimal) orders of magnitude more time consuming than just writing it. 11:21:59 don't worry, it will no longer be able to keep it up once they make the generic system powerful enough 11:22:08 the two features are exclusive 11:22:11 wob_jonas: i mean that rust enforces you to write clean code by design 11:22:28 (And more difficult too) 11:23:07 myname: no, it doesn't enforce.
it lets you write clean code if you want to, but also lets you override the safety checks if you're willing to learn about all the UB rules and magic low-level stuff (MaybeUninit, UnsafeCell, etc), which are very different from the C++ ones 11:23:16 that's what I like in it 11:23:25 python is sort of like that too by the way 11:24:20 the fricking indents are still putting me off about python, but I've determined that it's possible to extend its syntax to allow normal braced code, in a compatible way. I should implement it, install it to my machine so I can do proper python one-liners from the shell command line, 11:24:34 install it to HackEso, document it, and submit it to the python guys to perhaps get it adopted officially. 11:25:08 I mostly ignored python because the indents bothered me so much, but now that I took a look at it, I see it's become a good language (admittedly it took some time, just like with rust and C++) 11:25:42 i hate the errors i make in python 11:26:20 like, "foo"+bar with bar being a number will throw an error. i kinda get that, but i hate when it happens 11:27:11 wouldn't it be grand if there was a type checker to catch those... 11:27:20 int-e: tests are also more time-consuming to write than code (though perhaps not a full OoM). In the absence of a proof, I'll settle for tests. 11:27:53 It's when there's no proof and no tests I start to really worry, because that means there might not even be a specification 11:27:55 int-e: that's my point. a type-checker would be great. or just auto-converting to a string, maybe with a message in stderr 11:28:05 and with no specification, what are you even doing 11:28:05 cpressey: Yeah I'm afraid tests will remain state of the art for some time still. 11:28:11 but no, python decides to play along until it doesn't 11:28:54 int-e: Well, if you can prove some important, smallish properties of the code, that's a start. Type systems, basically, are that, aren't they. 11:29:29 is proving code in production really something people do? 11:29:50 myname: Yes. 11:29:51 cpressey: (simple, HM-alike) type systems are a sweet spot because for the most part you just have to write down the assertions and can leave proofs to the compiler. 11:30:31 cpressey: interesting. have to have a look if there are useful frameworks for that 11:30:34 myname: Amazon something something AWS, Microsoft something something USB stack, I'd have to find the references 11:30:38 cpressey: but there's also such a thing as too many tests. have you ever managed to run the ghc testsuite or the gcc testsuite? 11:30:53 cpressey: Dependent type systems leave that sweet spot behind (type inference doesn't work anymore) so I'm rather skeptical of those. 11:31:02 I haven't. I tried, but they take a very long amount of time to run. 11:31:34 int-e: Liquid Types are a sweet spot between dependent types and HM :) 11:31:42 i assume haskell to be relatively easy to prove first order properties for 11:32:22 i don't know where to even start for something like java without a full-blown model checker 11:32:27 some software does it well, where it has a smaller testsuite and a full testsuite. in some sense gcc does that too, with its three-stage compile: it compares the second and third stage binaries to each other, ignoring a few bytes that are allowed to differ, and they must be identical. 11:32:37 cpressey: hmm new keyword for me 11:32:56 or possibly buzzword, need to see about that 11:33:05 int-e: https://wiki.haskell.org/Liquid_Haskell 11:33:35 what's that...
11:34:38 ok 11:36:15 -!- tromp has quit (Remote host closed the connection). 11:36:35 Meh, it mentions "stack". :-P 11:36:43 int-e: I agree "Liquid Types" is a horribly-hip-sounding name. 11:37:02 Short for "logically qualified types" I believe 11:38:04 https://www.microsoft.com/en-us/research/video/liquid-types/ agrees 11:38:30 myname: The emphasis in industry is usually on reactive systems, so, model-checking state-machine-like descriptions is popular (TLA+ and Microsoft's P language come to mind) 11:39:42 okay, should be fairly easy there 11:40:13 "What Are 10 Examples of Liquids?" - "liquid types" is not the perfect search term. 12:06:26 I want to wait until there is more to read on one of two files I have open. C: use select(). Haskell: you must write Concurrent Haskell program now, oh and make sure to use STM because MVars have race conditions 12:08:29 ?! 12:08:29 Maybe you meant: v @ ? . 12:08:40 "MVars have race conditions"? 12:09:15 cpressey: Maybe you mean deadlocks? 12:09:19 do you have any specific questions? 12:09:34 "They are appropriate for building synchronization primitives and performing simple interthread communication; however they are very simple and susceptible to race conditions, deadlocks or uncaught exceptions." 12:09:51 I don't get the race conditions angle. 12:09:56 if you use them wrong, sure 12:10:19 The other two, yes, they are true. 12:10:24 All of Control.Concurrent seems to read like this. "We have these but you shouldn't use them, really" 12:11:43 You have to think about lock dependencies with MVars. 12:11:59 -!- tromp has joined. 12:12:20 (Assuming you use MVars as locks. MVars are also mailboxes, message queues of length 1.) 12:12:55 you also have to think of deadlocks when you use them as queues 12:13:13 MVars are *simple*. STM is a huge black box. 12:13:13 travelport has a full-blown api for quite some time where the documentation just says "this is not supported by any of our data backends" 12:13:20 wob_jonas: sure 12:14:37 All I need is select(). There used to be an hSelect, they got rid of it. There's a package on hackage which is an FFI to select which is probably the best I can do. 12:15:15 cpressey: So, again, I agree with that quote, except for the race conditions part, which to me suggests a completely broken MVar implementation. 12:15:28 do you have any opinion about eta/frege? 12:17:10 -!- tromp has quit (Ping timeout: 276 seconds). 12:18:11 cpressey: Or maybe I do have an idea. It used to be the case that "readMVar" could block (namely, if you use the MVar as a 1-slot queue, so there can be several pending writers.); maybe "race condition" alludes to that kind of phenomenon... 12:19:20 * int-e sighs 12:20:08 or people just write bad code with MVars, which has race conditions that put their own high-level data in a corrupt state 12:20:25 (Concrete scenarios rather than keywords would help. The keywords are just scary and cannot be filled with content unless you spend a lot time thinking up scenarios... or happen to know them already.) 12:20:44 yeah, specific questions are better 12:20:54 myname: I'm not generally a fan of languages that are slight variations on other languages. 12:20:55 especially with code samples and expected output and stuff like that 12:21:06 Do we have real-life stories of STM livelocks? 12:21:38 Or maybe just a scenario where the rts spends 99% of the time in managing the STM transactions, so nothing productive happens? 
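For cpressey's "wait until there is more to read on one of two files" case from earlier, the plain C select() version is roughly this; the descriptor names and the missing timeout are just illustrative.

    #include <sys/select.h>
    #include <stdio.h>

    /* Block until at least one of two descriptors is readable, then report which. */
    int wait_readable(int fd_a, int fd_b)
    {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd_a, &rfds);
        FD_SET(fd_b, &rfds);

        int nfds = (fd_a > fd_b ? fd_a : fd_b) + 1;
        if (select(nfds, &rfds, NULL, NULL, NULL) < 0) {
            perror("select");
            return -1;
        }
        if (FD_ISSET(fd_a, &rfds)) printf("fd %d is readable\n", fd_a);
        if (FD_ISSET(fd_b, &rfds)) printf("fd %d is readable\n", fd_b);
        return 0;
    }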
12:22:06 cpressey: valid point, but both are not really aimed to be slight variations afaik 12:22:17 Or is STM so complicated that it's only used by people who know what they're doing... and who, in particular, keep their transactions short so that this is a non-issue? 12:23:07 myname: I'm thinking more of Fay and Elm which are only "Haskell-like". I got the impression Frege was "not actually Haskell" too, but it might be the wrong impression. (Eta's website make my browser complain about phishing risks or something) 12:23:58 "Frege is a Haskell for the JVM." ... What does it mean to be "a Haskell"? 12:24:16 cpressey: In any case I'm perfectly happy to use MVars in simple cases, like waiting for a worker thread to finish, or managing exclusive access to a mutable data structure. 12:24:19 If it was "Frege is a Haskell compiler for the JVM" that would be fine 12:25:27 cpressey: Maybe you should learn you a Haskell for the greater good! 12:25:50 (typing this is painful) 12:29:04 both attempt to compile haskell for the jvm with different approaches 12:29:17 frege does overload the dot operator, for example 12:29:29 like, foo.bar is different from foo . bar 12:29:59 I guess I could just take a semi-non-trivial Haskell program and try to run it in Frege and see if it complains or not. 12:31:29 int-e: I would believe that STM scares off everyone who is not already rather expert at writing concurrent code, yes 12:40:59 -!- arseniiv has joined. 12:52:06 -!- Vorpal has quit (Ping timeout: 258 seconds). 13:23:02 I'll probably just use Control.Concurrent.Chan when it comes to it, it should be fine (single reader, multiple writers, nondeterminancy is okay). 13:26:36 cpressey: Yeah the only problem with Chan is that it doesn't prioritize the reader when the channel gets full (a key difference to channels in Erlang). 13:27:51 STM doesn't solve that one, and bounded channels are comparatively crude. 13:33:17 int-e: I don't see any mention of what it means for a channel to become full, in the Chan docs 13:34:41 In fact it mentioned the word "unbounded" 13:34:53 Yes. Which hides a problem... 13:35:31 ...namely, if your reader cannot keep up with the producers, you have a (kind of) memory leak at your hands. 13:35:56 Ehm. Technically yes. In practice, this is unavoidable though, right? 13:36:23 Sure but you can be more clever about it, if your scheduler knows about channels. 13:37:02 In some cases, the producer can block, or detect the problem and drop frames 13:37:03 I'm not saying it's a big problem. Just something to be aware of in a corner of your mind if you rely on channels heavily :) 13:38:02 Sure. 13:38:20 (And something that I found surprising when I first heard about it.) 13:38:49 (And that's the main reason I bring it up. It could be a pretty nasty surprise.) 14:39:28 -!- ais523 has joined. 14:40:39 wob_jonas: wait, how is a[a[a[x]]]++ UB? the increment has to happen after all the array dereferences 14:41:31 ais523: isn't it an UB if it increments a[x] or a[a[x]] because the indexes coincide? 
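A concrete instance of the expression in question, for reference; as the next few messages conclude, the inner reads are value computations feeding the lvalue and are sequenced before the increment's side effect, so this is well defined in C11 even when the indexes coincide.

    #include <stdio.h>

    int main(void)
    {
        /* All the indexes coincide: a[0] == 0, so a[a[a[0]]] is a[0] itself. */
        int a[1] = { 0 };
        int x = 0;

        a[a[a[x]]]++;   /* the reads of a[x] and a[a[x]] only feed the lvalue;
                           the side effect of ++ comes after them */

        printf("%d\n", a[0]);   /* prints 1 */
        return 0;
    }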
14:41:52 hmm wait, maybe it's not 14:42:08 no, because the dereference is sequenced before the increment as the increment uses its output 14:42:15 that needs a language lawyer, I think 14:44:33 cpressey: in terms of "just write your code in asm", I'm often tempted, but ideally I want to write a high-level description of the code so that it's portable and easy for a human to see it's correct, but want it to compile down to efficient asm on at least one commonly used platform 14:44:46 I mean this is in an area where I can hardly imagine a compiler going wrong, but I have no clue what the UB rules say about this. 14:44:53 let me look this up, maybe this applied only to my language where the main loop included a[a[1]++]-=a[a[1]++], which was an UB because it sometimes had to write into a[1] with the -= operator 14:45:20 but I think the post and preincrement has or had extra UB rules more strict than for other assignments 14:45:29 (it just needs a fucking printf) ← there's print!(), which isn't quite the same; are you doing one of the few things where the difference matters? 14:45:32 so that they can be rearranged by the optimizer 14:45:54 wob_jonas: /that/ example is UB because the ++s aren't sequenced with respect to each other 14:47:21 cpressey: For the time being, I think proving code correct is one to two (decimal) orders of magnitude more time consuming that just writing it. ← I've heard 1 order for enough testing to be fairly confident it's correct, 2 orders for an actual proof that it's correct, 3 orders for a proof that it's correct that can be machine-verified to not contain a fallacy 14:47:42 I've actually written code in the last form, it taking 1000 times longer than expected seems about right 14:48:19 (the programs being verified were very simple and would normally take less than a minute to write) 14:48:42 I think I include some amount of debugging and testing in the "just coding" category already. 14:49:14 yes, but you can have a project that's heavily tested and yet still be confident that it's wrong 14:49:15 And sure, the factor is worse for programs that you already know how to write :P 14:49:20 because the spec implies a lot of hidden complexity 14:49:45 "compile all C programs correctly", for example, is a nice simple spec at the surface level, but when you dereference the reference to the C standard, suddenly it's hard to be confident that your spec is correct 14:50:06 Because, ironically, you will *not* know in detail *why* they are correct; you'll be replicating an internalized pattern instead. 14:50:34 anyway, this is why I think declarative languages are so valuable: the program /is/ the spec, so the only way the program can be wrong is if the spec is wrong or if you translated the spec into code incorrectly 14:51:08 ais523: I think that's also kind of a weakness though. You can't check a program against itself. 14:51:12 (I have also written formally verified code. It wasn't trivial, and I think a factor of 100 is pretty much what we got.) 14:51:24 machine-checkable. 14:51:51 (Well ok you CAN, but it's a tautology.) 14:51:55 cpressey: I guess you'd need to get two different people to write the /specification/ and see if the programs matched 14:52:03 which would be useful, because buggy specifications are very common 14:52:15 I did a thought experiment about FizzBuzz recently 14:52:26 ais523: Basically yes, it's kind of like coding theory. 3 specifications would be ideal, you could go with majority-rules 14:52:29 Oh, target language makes a difference... 
we exported code to Haskell in the end, and purity helps. 14:52:45 (Kind of expensive though) 14:52:49 If you want mutable data structures, things get worse. 14:53:35 you start with a specification like "for all the numbers from 1 to an input n, output 'Fizz' if the number is divisible by 3, 'Buzz' if the number is divisible by 5, the number itself if it's divisible by neither 3 nor 5, and a newline unconditionally" 14:53:56 "the program /is/ the spec" <-- in many cases that's only true if you don't care about performance... 14:54:17 then you start thinking about issues like, "OK, where does this output go? What file format is it in? Is this meant to be readable by humans or computers? What about internationalization?" 14:54:47 "say there's an error halfway through the fizzbuzz, do we delete the output so far? leave it there and make the program resumable somehow?" 14:55:02 "how do we parallelise this?" 14:55:12 int-e: I see that as a deficiency in the languages more than anything else 14:55:13 ais523: ok, I think you're right about the 3SP case, that one is fine to be written as a[a[a[x]]]++, even if the indexes may coincide 14:55:47 and so are other cases where there's only one assignment and that's the outermost expression 14:56:15 ais523: Well, you can definitely give a specification which doesn't entail any particular efficient implementation strategy; how should such a language pick one? 14:56:30 at least according to the C11 rules 14:56:39 I mean, ideally, yeah, you could give it hints in comments or something, but - far from an easy problem 14:57:02 ais523: Uh, I disagree, at least given our current imperfect state of compilers. I want, as a programmer, control over the algorithm used to solve a task. 14:57:44 So that will be part of the program... 14:57:54 cpressey: I don't think it's impossible that computers will eventually be better at that than humans 14:58:06 Maybe in an ideal world where we could solve the halting problem my attitude would be different ;-) 14:58:33 one thing I noticed a while back is that if you just run all possible algorithms in parallel (assuming finitely many), and terminate when one ends, you get the best possible big-O performance (and a terrible constant factor) 14:58:46 Yeah that's a classic. 14:58:49 and thought this might actually be viable in a golfing language 14:59:21 It usually comes up in the following form: "We can write a TM that solves SAT in polynomial time iff NP \subset P" 15:00:10 (the point with NP being that we can check in polynomial time whether we've succeeded) 15:00:20 by generating programs and executing them in parallel until one of them comes up with a valid solution, then you don't care how it was made 15:00:46 also I'm pretty sure you don't mean \subset there, you probably mean either = or \subseteq 15:00:50 (which is something you need, really; "all possible algorithms" need to be filtered for correct algorithms for your task at hand, somehow) 15:00:56 (which are equivalent in this case because P \subseteq NP) 15:01:24 Yes I mean \subseteq. 15:01:49 I wanted to write NP (= P :P 15:02:22 * ais523 suddenly realises that (= and =) as operators would not be ambiguous in a C-like language, although )= would be 15:02:36 (which Isabelle/jEdit expands to ⊆.) 15:02:46 err, LALR(1)-ambiguous, that is; )= might be unambiguous if you have a generael parser 15:03:54 . 
o O ( a =( b && c )= d <-- it would be horrible nontheless ) 15:04:14 *nonetheless 15:04:18 Still waiting for the esolang that uses Earley or CKY to find all possible parses and interpret them all in parallel 15:04:19 int-e: you wrote =( not )=, which is what the unambiguity relies on 15:04:45 cpressey: that's probably only interesting if this is somehow the only way to gain TCness 15:04:59 ais523: did I :) 15:05:13 anyway... let's drop this 15:05:23 it's a tangent of a tangent anyway 15:05:33 (not speaking geometrically) 15:05:34 isn't that what the channel's about? :-) 15:05:57 geometrically, a tangent to a line is the line itself, and tangents are lines, so you can't stack them more than one level 15:06:34 indeed. hence the qualification. 15:06:58 anyway, I had ideas about a declarative golfing language which makes a guess about what order to run the commands in 15:07:34 basically by knowing what the computational complexity of each possible flow pattern for each command is, then trying to avoid bad complexities 15:08:08 this isn't perfect because the program might have a quadratic or even exponential blowup in data size, but it's going to do a lot better than most existing declarative languages if I ever get around to writing it 15:08:51 finding the path of least complexity sounds rather complex by itself 15:09:20 yes, but that's O(whatever) in the size of the program, not the size of the data it processes 15:09:45 this explains where the controversy over the computational complexity of regex-with-backreferences comes from 15:09:58 you can find articles online saying it's NP-complete, but I think it's in NL 15:10:03 cpressey: Hmm. "Insulting instruction in step $rs.\n" <-- do you recognize this? 15:10:15 int-e: SMETANA? 15:10:28 smetana.pl, rather 15:10:45 and the reason is that running an unknown regex-with-backreferences runs in NP time (you can encode 3SAT in it), but with any known regex, you can compile it to run in NL time with respect to the length of the string it's running on 15:10:57 cpressey: right 15:11:02 cpressey: you're good :) 15:11:24 Very few of the languages I've designed have used the term "step" 15:11:31 So that was a big hint. 15:11:47 a very long time ago, I started writing an esolang-based text adventure 15:12:04 it had a set of stairs where the steps were SMETANA commands, and swapped around as you tried to climb them 15:12:31 (but you could go up or down, making it into a puzzle) 15:13:47 ais523: regex-with-backreferences is CFL, isn't it? 15:14:31 "ais523 suddenly realises that (= and =) as operators would not be ambiguous in a C-like language" => the latter would be ambiguous in C++, where you can write (mytype::operator=) as an expression 15:14:40 cpressey: no, it's more powerful; it can solve a^n x a^n x a^n which a CFL can't 15:15:34 this feels morally equivalent to a^n b^n c^n, but regex-with-backreferences can't solve that due to the weird nature of backreferences 15:15:39 cpressey: I'm really playing with https://esolangs.org/wiki/SMETANA_To_Infinity! but I was wondering about the precise differences between that an your original :) (It turns out that the original is case sensitive, really insists on the order of statements, but is less space sensitive than S2I.) 15:16:20 "makes a guess about what order to run the commands in / basically by knowing what the computational complexity" => there are libraries where if you multiply more than two matrices, then it looks at their sizes, and multiplies them in a way that it's (hopefully) the fastest. 
something similar happens in SQL with complex statements, especially joins 15:16:20 . 15:16:33 hmm, mini-opinion poll: if a language is generally whitespace-sensitive and has semantically meaningful newlines, should it insist on its input file ending with a newline? 15:16:59 codegolf.stackexchange.com persuaded me to allow omission of the final newline in BuzzFizz, but I'm not sure that's correct 15:18:23 This is non-technical, but I hate it when cat-ing a text file messes up my next prompt, so I like final newlines. 15:18:25 that said, "text files must end with a newline" is an archaic rule that very few people seem to care about nowadays 15:18:35 (Yes I could use a different prompt, but that's besides the point.) 15:18:42 ais523: do you want to handle including a file into another file, or processing more than one input file (eg. given as multiple command-line arguments)? 15:19:05 fwiw, I think shells should add a new newline if the prompt wouldn't start at column 1, but you can't do that by configuring a typical shell, you'd have to patch it 15:19:29 wob_jonas: not in the case of BuzzFizz, it's a fairly constrained esolang; but I guess I'm also interested in a more general answer 15:19:37 #include normally has a newline after it anyway, though 15:19:40 if you are handling only one file, then definitely don't insist on it ending on a newline. if you handle multiple, then it's probably best to not require it, and consider file boundaries as boundary of line too, but I'm less certain and may depend on the syntax 15:20:02 Page feed is underrated, that's all I'll say 15:20:17 "should add a new newline if the prompt wouldn't start at column 1" => you sure can, you just have to put the right thing into PS1 15:20:23 cpressey: you mean formfeed? or is this a new control character I'm unaware of? 15:20:53 I'm still disappointed that people don't use nextlines as their newline character, but I can see why that happened (in most encodings, a nextline is two bytes long, which is a major drawback) 15:20:53 ais523: in practice it's probably least controversial to just treat the end-of-file as a newline unless immediately preceded by a newline. 15:21:08 #include normally has a newline after it anyway, though => the C syntax does, sure, but the TeX \includefile is weirder 15:21:19 wob_jonas: I can't think of a terminal control code that would have that effect non-interactively 15:21:25 on control characters: one time I was enamored by US, RS, GS and FS (an alternative to CSV) 15:22:07 or hmm… what about a cursor-right of the terminal width - 1, then outputting a space, then a goto-start-of-line? 
depending on how wrapping worked in the terminal, that might work 15:22:18 even then you need to know the terminal width to do it, though 15:22:39 arseniiv: I can see RS and FS as a CSV alternative 15:22:45 what do the other two do, though?# 15:23:04 ais523: uh, "\b\r\n" sort of, but I'm not sure it works in the first line 15:23:35 more robust would be printing as many spaces as the width of the terminal, then a "\r", but you need to know the width of the terminal for that 15:23:36 \b\r\n is equivalent to \r\n on just about everything, I think 15:24:00 just tested gnome-terminal 15:24:02 ais523: hmm, I don't remember how that worked 15:24:37 ais523: yeah, cursor-right with an arg count might work better 15:24:42 ais523: if I named them correctly, US (unit separator) should be the tightest one and FS (file separator) the least binding one; RS is record separator and GS is group separator, I thought it meant groups of records, let me look up a link… 15:24:42 no wait 15:24:46 'as many spaces as the width of the terminal, then a "\r"' does work, just tested that 15:24:47 I dunno 15:25:11 arseniiv: oh, file separator not f ield separator 15:25:15 so US and RS, then 15:25:33 arseniiv: \x1F is for the biggest blocks, \x1C is for the smallest blocks, in sequence for the two between, forget their names 15:25:40 I'm upset at people not caring about the C1 control codes 15:25:59 there's a Unicode encoding, UTF-1, that's designed to allow them all to be given literally 15:26:03 * cpressey nods gravely 15:26:05 but it's not very popular 15:26:17 ais523: wob_jonas: found it: https://en.wikipedia.org/wiki/Delimiter#ASCII_delimited_text thought wob_jonas has said it already 15:26:49 NO 15:26:55 arseniiv: no, I got them backwards 15:26:56 I'm stupid 15:27:03 it's backwards from how it should be 15:27:14 \x1C is for the largest blocks and \x1F for the smallest 15:27:25 that always annoys me so I should have remembered 15:27:30 sorry 15:27:31 I was mixing them up once I think too 15:27:57 wob_jonas: it suddenly struck me that Perl's $; is logically a unit separator character, but it's actually file separator that's used as the default value 15:28:07 I guess that makes clashes less likely, but it's also less semantically correct 15:28:37 hmm 15:29:40 also you may laugh but I once thought that UTF-1 (by some dark magic) is a 1-bit encoding 15:30:57 that doesn't seem particularly implausible? 15:31:04 UTF-8, UTF-16, UTF-32 and UTF-7 all contributed to this, yeah 15:31:20 though I think I hadn’t known the last one then 15:31:23 you'd just find some arbitrary-width numeric encoding (e.g. Fibonacci encoding), use it to encode all the codepoints, and concatenate 15:31:46 there's also a UTF-5 15:32:06 (also a UTF-6 but, confusingly, it's a 5-bit encoding) 15:32:25 and UTF-9 which was an April Fools RFC 15:33:00 The use of UTF-32 under quoted-printable is highly impractical 15:34:15 huh, UTF-1 is actually more efficient in space usage than UTF-8 up to and including U+38E2D 15:34:19 UTF-5 and UTF-6? I hadn't heared of those 15:34:44 basically because it doesn't limit continuation bytes to a particular range, they can be ASCII or extra start bytes 15:35:27 err, printable ASCII 15:36:24 control codes, both C0 and C1, are encoded as unambiguously as possible because UTF-1 was intended for use with decoders that used control code sequences to switch between encodings and/or as metadata for themselves 15:36:25 e.g. 
terminals 15:37:25 * ais523 suddenly realises that literally encoded C1 control codes are never prefixes of valid UTF-8 codes, so you could in theory write a terminal that supported them in all locations except mid-UTF-8-character 15:37:36 ais523: which one is the standard (perhaps ECMA) that gives the full general grammar for terminal-style escape codes? I know some of the basics, but not the full grammar for it 15:37:51 Ecma-48 15:38:29 although it's confusing to read because it gives character codes in decimal-coded-hexadecimal 15:38:59 it refers to Ecma-35 for encoding handling, though, and Ecma-35 compatibility is why things like UTF-1 were invented 15:39:11 ais523: alternately make the terminal take utf-8 encoded C1 codes (when it's generally reading utf-8 input, obviously), which also works except in the middle of utf-8 characters 15:40:00 wob_jonas: I was planning to do that anyway 15:40:09 but it's not particularly useful because C1 codes can be encoded using C0 codes 15:40:18 (for the benefit of 7-bit terminals) 15:40:23 and that only makes them a byte longer 15:40:39 this is the technique that's almost universally used nowadays to send C1 codes to terminals, as it's no longer than the UTF-8 encoding would be 15:40:52 sure 15:42:16 it's why ESC [ is so common in terminal control codes used practically, because ESC [ is the C0 encoding of the C1 control code CSI 15:42:49 yeah 15:46:59 ooh, I /finally/ understand the distinction between presentation and data cursor movement commands 15:47:08 it's to do with right-to-left languages 15:47:23 the presentation cursor movement commands move, e.g., "left" or "right" 15:47:37 the data cursor movement commands move, e.g., "forwards" or "backwards" through the text 15:47:49 so the correspondence between them is different when over LTR text and when over RTL text 15:48:52 -!- ais523 has quit (Quit: sorry for my connection). 15:49:05 -!- ais523 has joined. 15:52:29 ok, chapter 5.4 in ECMA 48 is relevant. 15:54:03 oh god 15:54:06 it's control codes hour 15:54:11 any good esolangs based on ECMA-48? 15:54:39 arguably Ecma-48 /is/ an esolang 15:54:44 apart from that, probably not 15:55:33 kmc: keyboard codes that the terminals emit for various combinations of settings and keys and modifiers 15:56:19 kmc: do you know why, in vim, if you press escape to exit insert mode then press O to open a new line in insert mode, it doesn't immediately react? 15:56:44 the combination works, but only updates the screen at the next keypress 15:57:06 this puzzled me for a while 15:57:17 afk for an horu 15:57:21 -!- wob_jonas has quit (Remote host closed the connection). 
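To make the ESC [ / CSI point concrete: CSI is the single C1 control U+009B, ESC [ is its 7-bit encoding, and in UTF-8 it comes out as the two bytes C2 9B. A small Haskell sketch that issues "cursor forward 5 columns" (CUF) both ways; the 7-bit form works essentially everywhere, while whether a terminal accepts the 8-bit/UTF-8 form varies, as discussed above.

    import System.IO (hSetEncoding, stdout, utf8)

    csi7bit, csiC1 :: String
    csi7bit = "\ESC["        -- ESC [ : the C0-compatible encoding of CSI
    csiC1   = "\x9B"         -- U+009B itself; UTF-8 encodes it as C2 9B

    main :: IO ()
    main = do
      hSetEncoding stdout utf8
      putStr (csi7bit ++ "5C")     -- CSI 5 C = cursor forward 5 columns
      putStrLn "moved with ESC ["
      putStr (csiC1 ++ "5C")       -- same control, sent as a C1 byte
      putStrLn "moved with raw CSI (terminal support varies)"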
15:59:35 because keys like the arrow keys that don't correspond to ASCII often send control codes starting with ESC O, and vim is trying to disambiguate 15:59:51 libuncursed does the same thing, but with a timeout, and the timeout is very short nowadays so it's hard to notice 16:00:55 yeah 16:01:12 this is also why you can get control code junk in your irssi session if you're using mosh and it reconnects after a long drop 16:01:21 mosh will dump a bunch of control codes at irssi all at once 16:01:40 and irssi paste detection will interpret that as a literal paste 16:01:47 why it thinks I want to paste control characters, I do not know 16:02:01 maybe mosh should have an option to slow it down 16:02:32 oh 16:02:54 (I believe this is occasionally happening with plain ssh as well) 16:03:13 on laggy connections (yes those still exist) 16:03:46 ugh, the "this is a paste" control code should have been standardised so that paste detection doesn't have to be done based on timing 16:03:57 int-e: I know, I use one 16:30:24 -!- cpressey has quit (Quit: A la prochaine.). 16:30:52 -!- tromp has joined. 16:41:13 -!- tromp has quit (Remote host closed the connection). 16:45:05 -!- tromp has joined. 16:55:00 -!- FreeFull has joined. 17:06:33 -!- tromp has quit (Remote host closed the connection). 17:14:06 -!- tromp has joined. 17:23:12 Oh, is _that_ why that happens 17:23:48 Ugh, the terminal interface sucks. 17:32:18 One big scow about mosh is how I can press some keys while not connected to the network and it buffers them forever. 17:32:29 And there's no way to clear the buffer. 17:33:32 pikhq: yeah that's terrible 17:33:37 shachaf: yeah 17:33:54 terminals are one of those "why have we still not come up with a better way to do this" things 17:34:02 but I guess the benefits are not worth the legacy breakage 17:34:10 Terminal software is scow in the first place. 17:34:16 GUIs are TG (in theory) 17:34:17 after all, things mostly work now. it's not cutting edge 17:34:24 shachaf: I believe in text for input, GUI for output 17:34:28 in most cases 17:34:34 or hybrid 17:34:50 Well, you were talking about irssi 17:35:12 It's just ridiculous that the idea of pasting text with a newline that somehow turns into sending a message is even a thing that has to be worked around. 17:35:33 in-band signaling is certainly scow 17:35:48 yeah 17:38:16 Of course GUI software has its own issues. 17:39:49 everything is bad 17:40:51 <\oren\> I mean you could make an IRC program where you press ctrl-S to send or something 17:42:33 What about an IRC program where you press enter to send but when you paste text containing a newline character it doesn't trigger that? 17:42:55 Oh, another thing about terminal programs is that there are a bunch of keys you just can't detect correctly. 17:43:08 <\oren\> That might be possible 17:43:23 <\oren\> On most systems in raw mode, enter is \r 17:43:52 <\oren\> so assuming pasted newline is \n 17:44:03 <\oren\> you could distinguish them 17:44:55 And if your text contains \r? 17:45:27 These are workarounds for a thing that shouldn't even need working around. 17:45:29 <\oren\> then you're fucked. there's no out of band signals in ssh afaict 17:46:03 <\oren\> direct keyboard access remotely is a bad idea anyway IMO 17:46:42 -!- tromp has quit (Remote host closed the connection). 17:49:47 `ysaclist 17:49:47 ysaclist: boily shachaf 17:55:55 -!- tromp has joined. 17:59:07 -!- b_jonas has joined. 17:59:42 Ugh, the terminal interface sucks.
← it has precisely one problem, which is that the Esc key sends the Esc character code, which is a prefix of some other character codes 17:59:54 no, there are a lot of other problems with terminals 18:00:16 termios is a whole mess 18:00:17 this wouldn't be a problem if people expected to use Esc as a way of typing terminal control code sequences, but people normally think of it as a key on its own, thus an ambiguity 18:00:30 and there's ISO-2022 locking control codes, which are terrible 18:00:49 "the "this is a paste" control code should have been standardised" => it is. there are at least two different such codes standardized. 18:00:58 the great thing about standards 18:01:02 oh, I guess being able to /type/ XON/XOFF is a problem too 18:01:42 \oren\: there are plenty of signals that are /meant/ to be out of band, the issue is that you can type them anyway 18:03:39 ais523: anyway, I knew about the \eO thing, the vim behavior puzzled me because all I noticed is that sometimes the O command behaves like that, and didn't notice that it's when the previous keypress was \e 18:04:28 come to think of it, perhaps the underlying issue is UNIX's conflation of text and binary files 18:04:52 C0 and C1 codes in text files are supposed to have a specific, standardised meaning (and in theory, the text file should contain specific byte sequences to identify itself as using them) 18:05:10 whereas in a binary file, bytes with bits 5, 6, and 7 clear could mean anything 18:05:34 no! I like all those things. and I like that control-M and enter type the same thing and I don't have to teach each terminal program individually that they are the same, and if I didn't like it, then I'd chnage the bindings of the terminal 18:06:06 and I like being able to use the same programs for all sorts of files 18:07:00 kmc: Are you still using Microsoft® Windows®? 18:07:50 I reinstalled Windows 7 this week. 18:08:19 (Hmm, is that a smart thing to say on a publicly logged channel...) 18:09:21 int-e: not particularly dumb, the windows-specific malware can tell it directly anyway, it needn't look on irc for that info 18:09:25 "If Windows 10 has taught us one thing, it is that we hate updates." 18:10:03 shachaf: yes 18:10:08 ™ 18:10:35 The real reason is that it's the last cloud-free Windows. And it's just for games. 18:13:26 -!- tromp has quit (Remote host closed the connection). 18:22:06 -!- tromp has joined. 18:30:46 I boot to Windows once every few months which means does a trillion updates each time. 18:30:51 -!- Hooloovo0 has quit (Remote host closed the connection). 18:31:10 I don't really understand why it takes so long to update. 18:38:49 -!- Hooloovo0 has joined. 18:53:38 shachaf: IIRC at least some versions of the Windows update algorithm are not O(n) 18:53:44 but Microsoft didn't notice for ages because the constant factor was small 18:54:11 -!- ais523 has quit (Quit: quit). 19:19:45 -!- tromp has quit (Remote host closed the connection). 19:25:39 [ 4^6 19:25:40 b_jonas: 4096 19:48:06 On ifMUD there is a @paste command in case you are making a multi line paste. (Another alternative would be to use other software with xclip to add a prefix to each line.) 19:49:49 (However, there is then @endpaste and @quit both of which override the paste mode.) 19:50:11 . 19:51:49 . 19:52:45 -!- tromp has joined. 19:55:25 whats up 19:57:29 -!- tromp has quit (Ping timeout: 252 seconds). 20:01:00 -!- tromp has joined. 
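On the paste-signalling point above: one widely implemented mechanism (the log does not say whether it is one of the two b_jonas has in mind) is xterm-style bracketed paste, DEC private mode 2004, where the application opts in and the terminal then wraps every paste in a pair of control sequences, so no timing heuristics are needed. A Haskell sketch of the byte sequences involved:

    -- Bracketed paste as implemented by xterm and most modern emulators:
    --   enable:  ESC [ ? 2004 h      disable: ESC [ ? 2004 l
    --   a paste then arrives as:  ESC [ 200 ~  <pasted text>  ESC [ 201 ~
    enablePaste, disablePaste, pasteStart, pasteEnd :: String
    enablePaste  = "\ESC[?2004h"
    disablePaste = "\ESC[?2004l"
    pasteStart   = "\ESC[200~"
    pasteEnd     = "\ESC[201~"

    main :: IO ()
    main = do
      putStr enablePaste
      -- A client would now read stdin and treat anything between
      -- pasteStart and pasteEnd as literal text, never as keystrokes.
      putStrLn "bracketed paste enabled"
      putStr disablePaste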
20:01:32 "it has precisely one problem, which is that the Esc key sends the Esc character code, which is a prefix of some other character codes" I also think that is what is sometimes the problem, and would not be the problem if people expected to use Esc as a way of typing terminal control code sequences. 20:01:59 but nobody ever would do that 20:02:15 (One thing to do would be for escape to have a longer code when application keyboard mode is enabled.) 20:03:25 The problem is the meaning of [ in Vim, I think. 20:04:12 kmc: zzo38 would! 20:04:28 yes 20:04:33 "but nobody ever would do that" is on the list of famous last words. 20:05:23 true 20:05:59 If you're proposing alternate designs for terminal interaction you've already lost. 20:06:26 The main reason to use terminals that work the way they do is compatibility with 1970. 20:06:32 I think it is fine how the escape works; rather, some programs try to do it something else, that is a problem. 20:08:21 The command [D list all defines, but instead if it could mean, move cursor left and reenter previous mode (if escape is pushed in command mode then it will set command mode as the previous mode and remain in command mode), then maybe it will work. 20:24:51 zzo38: I don't think so. IMO for vi, the main problem is what the escape key does in insert mode. I generally use control-C to exit insert mode, which is not quite equivalent but mostly is. you could have control-C be completely equivalent (and some other key do what control-C does now) and not use escape at all. 20:25:01 then escape would be used only to introduce control sequences. 20:26:59 Yes, that would be another way to fix it. 20:29:11 -!- Lord_of_Life_ has joined. 20:32:04 -!- Lord_of_Life has quit (Ping timeout: 248 seconds). 20:32:10 -!- Lord_of_Life_ has changed nick to Lord_of_Life. 20:36:00 Guess what, I just pasted a command into my shell and it had a newline at the end so it got run! 20:36:20 That was certainly my intended behavior, and not a misfeature or bad UI. 20:41:58 -!- arseniiv has quit (Ping timeout: 272 seconds). 20:43:26 Man, I tried to compile a Rust program and it downloaded over 100 dependencies. 20:49:17 Also the directory size is 1.8G 20:53:39 I've been writing Go for fun lately. It's got modules now. 20:54:00 Godules. 21:02:32 -!- tromp has quit (Remote host closed the connection). 21:02:42 Oh man, I said "orientation" instead of "direction". 21:02:50 v. embarrassing 21:04:30 -!- atslash has quit (Quit: This computer has gone to sleep). 21:13:03 -!- ais523 has joined. 21:27:51 I think I have a SMETANA to Infinity! pre-quine... (pre-quine = program that generates a quine as its output) 21:28:48 So all quines are pre-quines? Why are pre-quines interesting? 21:29:02 int-e: scary 21:29:03 Because you can be somewhat sloppy in generating them. 21:29:27 smetana to infinity sounds like a hard language to make a quine in 21:29:40 shachaf: In this case, the final quine will have 5-digit labels (so starting from 00001) but the pre-quine doesn't, it starts at 1. 21:29:43 I mean there probably exists a quine, but to actually construct one is hard 21:30:45 int-e: about how long is it? 21:31:04 Did you ever play Zork: Grand Inquisitor? 21:31:21 b_jonas: 1.3MB for the pre-quine... it's still computing the final one. It's... slow. 21:31:53 b_jonas: There's lots of room for improvement. 21:31:56 sure 21:32:11 tell us when you've verified it (by running twice and comparing) 21:34:02 -!- tromp has joined. 
21:34:33 b_jonas: I wonder if anyone ever managed to say a fungot quine 21:34:34 b_jonas: hello ski :) ( actually i don't think you could like rephrase it? do you normally see? 21:34:52 one that doesn't use hat commands that is 21:35:01 ^help 21:35:02 ^ ; ^def ; ^show [command]; lang=bf/ul, code=text/str:N; ^str 0-9 get/set/add [text]; ^style [style]; ^bool 21:35:31 because it would be easier to write one with ^ul 21:36:31 https://esolangs.org/wiki/Underload#Quine lists some quines that give a starting point 21:36:40 slow: It'll execute 2,460,020,224 swaps to process 35072 bits of data (twice). 21:37:30 Judging by the progress so far it'll take an hour or two, and then about the same time again to verify. Fun! 21:38:46 -!- tromp has quit (Ping timeout: 276 seconds). 21:45:34 Underload is normally my goto language for botquines, if they support it 21:45:35 -!- xkapastel has quit (Quit: Connection closed for inactivity). 21:45:44 ^ul ((^ul )SaS(:^)S):^ 21:45:44 ^ul ((^ul )SaS(:^)S):^ 21:45:59 because it's really easy to write quines in it 21:57:04 -!- ais523 has quit (Ping timeout: 244 seconds). 21:59:44 fungot: Please repeat this sentence, including the prefix "fungot:". 21:59:45 fizzie: you could look into chicken. csc gets a lot randomer soon. away for a while google had a paper on an experimental sun pipeline i was looking fro a short cut. 21:59:47 Aw. 22:00:52 fizzie: Please repeat this sentence, including the prefix "fizzie:". 22:01:12 fizzie: Please repeat this sentence, including the prefix "fizzie:". 22:01:29 fizzie: Please repeat this sentence, including the prefix "fizzie:". 22:04:43 -!- AnotherTest has quit (Ping timeout: 252 seconds). 22:24:00 I suppose all quines would be pre-quines, but, not all pre-quines are quines. Is that it? 22:25:40 what's a pre-quine 22:26:53 zzo38: That's what I suppose also. 22:27:09 A pre-quine is apparently a program that outputs a quine. 22:28:22 i see 22:28:29 well then a quine is certainly a pre-quine 22:28:42 -!- tromp has joined. 22:28:45 and you can make a quine into a not-quine pre-quine by prepending a nop or something 22:28:49 so yes what zzo38 said is true 22:29:04 are there interesting things to be said about pre-quines? 22:29:46 I don't know. My guess is that it might depend on the programming language in use, but generally not. 22:32:57 -!- tromp has quit (Ping timeout: 252 seconds). 22:33:37 Are there interesting esolangs where you don't have the property that you can easily make a program longer/different with something like a nop? 22:35:30 I don't know. 23:07:57 b_jonas: 101 minutes. I think I'll use an alternate method of confirmation (namely, drop all those extra 0s that the quine generator put in and compare... which looks fine!) 23:09:11 http://int-e.eu/~bf3/tmp/quine.s2i :) 23:10:52 I have a good programming language. It's called md5sum. 23:10:56 You should write a quine in it. 23:11:44 > exp (-1) 23:11:46 0.36787944117144233 23:12:01 shachaf: a priory that's the chance that such a quine even exists. 23:16:28 shachaf: Oh for a fixed file name. If you can vary the file name the chances get better :) 23:16:46 I was just typing that. 23:16:48 -!- adu has quit (Quit: adu). 23:20:04 I thought by taking from standard input?
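A back-of-envelope for the exp (-1) figure above, under the (purely heuristic) assumption that md5 behaves like a uniformly random function and that a "quine" here means a string equal to its own md5sum output line: there are N = 16^32 = 2^128 candidate strings of the right shape, each a fixed point with probability 1/N, so the expected number of fixed points is 1 and

    P(\text{no fixed point}) \approx \Bigl(1 - \tfrac{1}{N}\Bigr)^{N} \approx e^{-1} \approx 0.368, \qquad N = 16^{32} = 2^{128}.

Letting the file name vary gives a fresh, roughly independent set of candidates per name, which is why the chances improve, as int-e notes.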