00:00:17 actually FORTE lacks the register part, i guess. 00:00:23 oerjan: C-INTERCAL requires a command line option, but lets you assign directly with the option 00:00:34 CLC-INTERCAL doesn't require the option but you can't just say DO #1 <- .1 00:00:51 you need to sneak the assignment in indirectly, buried inside overloads 00:01:02 oerjan: Wouldn't it be assigning a... yeah, you got it right 00:01:58 ais523: Wait, why were you recognized for INTERCAL? (I admit I had to check the wiki to see if you invented it and it somehow never occurred to me xD) 00:02:01 although it really depends on whether you're assigning the register to the constant, or the value of the register to the constant 00:02:11 hppavilion[1]: I maintain the most popular implementation 00:02:21 ais523: Ah, makes sense 00:02:33 although I didn't invent the language itself, I did invent many features that modern implementations have 00:02:43 oerjan: Now consider <-1, v> 00:02:47 although most come originally from CLC-INTERCAL, which is more experimental 00:03:02 You need to allow registers to be sets for that to work, IIAC 00:03:24 hppavilion[1]: nah, that's like writing *(&(&x)) = y; in C 00:03:49 ais523: Maybe it's that I was thinking of 00:03:52 Yeah, that's it 00:03:52 so long as some memory location happens to be holding the value of &x, then &(&x) is perhaps not impossible to define 00:03:53 -!- mauris_ has joined. 00:04:11 hppavilion[1]: ooh 00:04:18 -!- mauris has quit (Ping timeout: 252 seconds). 00:04:32 (x, y) is, if I am correct, assigning the register referenced by the x chain of length r to all registers that reference it directly 00:04:33 imo hppavilion[1] starts a lot of projects and finishes few 00:04:46 mauris_: Yeah, that's correct 00:04:52 mauris_: hppavilion[1] is more of an ideas person 00:05:00 I'm hoping that the ideas will become higher quality over time 00:05:06 mauris_: I know starting a project I'll probably never finish it, but it's fun while it lasts 00:05:36 <-1, v> reminds me of threaded intercal somehow 00:05:50 except that's about control rather than data flow 00:06:26 Wait, means backwards, and sets the register under the r-chain to the value directly referenced by -1... I think might just be setting the register at the end of the chain to its own address 00:06:56 No, wait, 0 is an immediate value... yeah, I think that's right. But it probably isn't, knowing me. 00:07:13 -!- mauris has joined. 00:07:37 ais523: Is that right? 00:07:47 However, is equivalent to what I mentioned earlier 00:07:47 hppavilion[1]: hm <1, -1> would be setting a register to several potential values. maybe that could be forking like threaded intercal 00:07:59 oerjan: Yeah, that sounds good 00:08:13 oerjan: Though I was thinking of treating the register as a set instead 00:08:22 hm 00:08:28 oerjan: I'm trying to keep this mathematically rigorous, at least a little bit 00:08:29 oerjan: it's more like quantum intercal (which isn't like quantum computing, but fits your description quite well) 00:08:50 -!- mauris_ has quit (Ping timeout: 245 seconds). 00:09:14 Of course, neither one is a good idea in the long run, given that you can't do either sets OR forking like that on most real machines 00:09:27 you can, it's just slow 00:09:35 ais523: Well, yes 00:09:50 ais523: And it requires elaborate tricks with the memory to do it 00:10:06 For the value of "elaborate tricks" containing "linked lists" as an element 00:10:21 oerjan: and . Consider that for a moment.
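A minimal Python sketch of one reading of the <r, v> chain notation being worked out here, based on the SET/MOV examples that come up a little later in the log (<1, 0> behaving like `regs[x] = y`, <1, 1> like `regs[x] = regs[y]`). The names `deref` and `chain_op` and the register layout are invented for the sketch; the exact semantics in the conversation are still being pinned down.

```python
def deref(regs, x, depth):
    """Follow register indirection `depth` times; depth 0 returns x unchanged."""
    for _ in range(depth):
        x = regs[x]
    return x

def chain_op(r, v):
    """<r, v>(x, y): write the depth-v dereference of y into the register
    reached by chasing x through r - 1 extra levels of indirection."""
    def op(regs, x, y):
        regs[deref(regs, x, r - 1)] = deref(regs, y, v)
    return op

regs = {0: 2, 1: 0, 2: 7}
chain_op(1, 0)(regs, 1, 42)   # SET:  regs[1] = 42
chain_op(1, 1)(regs, 1, 2)    # MOV:  regs[1] = regs[2]
print(regs)
```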
00:11:16 (Of course, you probably need a complex memory space of complex numbers for that, but then it's just trivial) 00:11:23 hppavilion[1]: well if the i's have the same parameter then that _might_ copy a value two times if you're lucky. 00:11:35 oh wait 00:11:38 you meant that i 00:11:39 oerjan: wut. 00:11:46 Yes /i/ did 00:12:17 (Buddha tisk) 00:12:22 no idea what that would mean since multiplication of the depths isn't a well-defined thing even with integers. 00:12:23 -!- mauris_ has joined. 00:12:37 oerjan: Multiplication of the depths? 00:12:41 Oh, right 00:12:53 Of the r- and v-chain lengths 00:13:01 hppavilion[1]: for i to make sense you'd need i*i = -1 to mean something meaningful. 00:13:09 oerjan: Yeah, I know 00:13:12 -!- Lord_of_Life has quit (Excess Flood). 00:13:28 oerjan: Really, I just like shoving complex numbers where they shouldn't go 00:13:34 OKAY 00:13:38 -!- mauris__ has joined. 00:14:02 oerjan: But /maybe/ we can define where r and v are real numbers 00:14:13 SKEPTICAL 00:14:21 start with 1/2, i guess. 00:14:23 oerjan: I agree, but it might be possible 00:14:32 -!- mauris has quit (Ping timeout: 256 seconds). 00:14:33 oerjan: 1/2 in which place? 00:14:36 Oh, you mean for each one 00:14:40 Hm... 00:14:43 anywhere. 00:14:55 <1, 0> is SET, <1, 1> is MOV, so what's <1, 0.5>? 00:15:06 -!- Lord_of_Life has joined. 00:15:09 I will now go on a spirit walk to figure it out 00:16:03 maybe if you're very lucky there's some formula that gives the depth n reference and which somehow makes sense for non-integers 00:16:10 clearly whatever operation half-dereferences an address, doing it twice fully dereferences the address 00:16:13 oerjan: Perhaps it's some sort of weighted operation? Where 0 clobbers, 1 follows, etc.? 00:16:19 like the gamma function generalizes factorial 00:16:29 ais523: Ah, yes 00:16:39 I think the problem is that dereference isn't continuous or even monotonic 00:16:48 -!- mauris_ has quit (Ping timeout: 256 seconds). 00:16:55 thus you wouldn't expect its iteration to be defined for non-integers 00:17:02 there's also a formula that allows non-integral time integration/differentiation that way 00:17:16 Yeah, I don't think it works 00:17:20 at least for nice enough functions 00:17:30 -!- mauris has joined. 00:17:32 Back to the usage of multiplication in chains 00:17:44 (sc?hwart?z functions, or the like) 00:18:14 -!- mauris__ has quit (Ping timeout: 252 seconds). 00:18:28 (basically, fourier transformation changes diffentiation into multiplication by a function) 00:18:50 if you do define this operation, we can have Two And A Half Star Programmer :-) 00:18:59 ais523: xD 00:19:35 three star programmer is basically <3,++> 00:19:43 i.e. it's a rmw rather than just a copy 00:20:44 <\oren\> speaking of not finishing things, I should work on that text editor I always said I'd make 00:21:09 * oerjan has a hunch that _if_ you found a nice formula that calculates depth-n reference on a set of registers, then non-integer depths might not be in the set 00:21:50 i.e. if you try to repeat reference 1/2 times on something involving registers {0,...,n}, it might well answer register 1/2 or something. 
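On the <1, 0.5> question: a brute-force check, on a toy register file, of whether "half a dereference" even exists, i.e. a map g with g(g(x)) = f(x) where f is one full dereference step. This only illustrates oerjan's point that such a g may or may not exist inside the register set; every name here is made up for the sketch.

```python
from itertools import product

def compose(g, f):
    return {x: g[f[x]] for x in f}

def half_steps(f, domain):
    """All maps g on `domain` with g(g(x)) == f(x) for every x."""
    found = []
    for values in product(domain, repeat=len(domain)):
        g = dict(zip(domain, values))
        if compose(g, g) == f:
            found.append(g)
    return found

regs = {0: 1, 1: 2, 2: 0}                 # each register points at another
deref = dict(regs)                        # one full dereference step
print(half_steps(deref, sorted(regs)))    # candidate meanings of <1, 0.5>
```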
00:21:53 well, let's think about it this way 00:22:08 dereference is basically evaluating an arbitrary function, because you can put /anything/ in the registers 00:22:37 thus, this means that for any function f, we need to be able to find a function g such that for all arguments x, g(g(x)) = f(x) 00:22:39 (inspired by fourier transforms, i think the registers should be arranged as elements on a cyclic group) 00:22:39 So for <1, 0.5> I basically need something halfway between pythons `regs[x] = y` and `regs[x] = regs[y]` 00:23:00 ais523: What's the ++ in <3, ++>?? 00:23:05 or as e^(2pi*i*k/n) 00:23:17 hppavilion[1]: increment; you're reading the value in the register, incrementing it, storing it back 00:23:33 ais523: Ah? 00:23:35 OK 00:23:54 ais523: That's not covered by the <> notation; I haven't gotten to arithmetic yet (I'm doing conditionals next) 00:24:51 Oh, and if anybody here ever uses this seriously, remember that angled brackets are preferred when possible over <> in the notation xD 00:26:00 demnod brackets 00:26:45 oerjan: I suppose perhaps we should do square roots instead of normal fractions and work up from there 00:26:54 For example, what's ? 00:27:50 0 obviously 00:28:02 Phantom_Hoover: How? 00:28:13 Phantom_Hoover: That makes 0 sense 00:28:18 i'm facetiously assuming that's an inner product 00:28:19 (PI) 00:28:20 Ah 00:28:25 OK xD 00:28:29 hppavilion[1]: that isn't easier, unless you're taking the square root of a square number 00:28:36 I have no clue what an inner product is, so yeah 00:28:46 ais523: Unless we make a decision about what it should do 00:28:54 it's the generalisation of the dot product 00:29:09 our company needs to reevaluate our inner product strategy 00:29:09 also a) I've never seen that notation for inner products before, b) inner product on real/complex numbers is just normal multiplication 00:29:14 Probably, calling (x, y) twice should be equivalent to <2, 0>(x, f(y)) 00:29:26 ais523, ...you've never seen angle brackets for inner product? 00:29:30 Wait, but you clobber one of the registries in the process xD 00:29:39 no, I'm more used to writing it with a dot 00:29:43 like with dot products 00:29:55 dot product has different implications though 00:29:57 Except no you don't 00:29:58 Hm... 00:30:39 angle brackets and comma, to me, are tuple notation 00:30:56 ais523: i think on complex numbers you should conjugate one argument hth 00:31:00 Well clearly, <1, v>(x, y) twice is just <1, v>(x, y) once, IIRC 00:31:11 oerjan: oh right, that rings a bell now you've mentioned it 00:31:42 my hunch is that dot products should be positive definite so you can orthonormalise 00:31:50 Phantom_Hoover: now I'm having an hppavilion[1]-like idea of "what if, from an inner product space's inner product, you could extract either of the original arguments by reversing it somehow?" 00:32:17 but weirdly WP doesn't mention positive-definiteness as a prerequisite for gram-schmidt 00:33:04 * oerjan has never picked up any difference in meaning between dot and inner product 00:33:17 ais523, inner products are bilinear, i.e. linear maps from the tensor product to the underlying field, so on any space with dimension greater than 1 they'll destroy data irreversibly 00:33:27 OK, should repetition of an operation be multiplication or addition of those operations? I'd say multiplication, because <1, v>(x, y) twice is the same as once 00:33:47 Even if x=y 00:34:00 Wait... 00:34:03 No, it isn't 00:34:07 Is it? 00:34:15 Hm... 00:34:37 I think they aren't the same. 
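For reference, the conjugate-one-argument convention oerjan mentions, in a few lines of Python. Nothing here is specific to the register discussion; it is just the standard Hermitian inner product and the positive-definiteness that makes orthonormalisation possible.

```python
def inner(u, v):
    """Hermitian inner product <u, v>: conjugate the first argument."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

u = [1 + 2j, 3 - 1j]
v = [2 + 0j, 1 + 1j]

print(inner(u, v))   # complex in general
print(inner(u, u))   # real and non-negative: the positive definiteness
                     # needed to orthonormalise (Gram-Schmidt)
```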
00:34:57 although i have seen both dot, ( , ), < , > (i think) and of course the physicists' < | > 00:35:12 oerjan: OH MY GOD IT'S BRA-KET 00:35:17 WHY IS IT FOLLOWING ME 00:35:24 WHAT DOES IT WANT WITH ME 00:36:07 hppavilion[1]: it wants to bra-ek you hth 00:37:36 hppavilion[1]: incidentally, my computer tried to prevent me from sending that < | > line by disconnecting me at the precise moment i pressed return hth 00:38:23 bra-ket's some hybrid thing where maybe i should have heeded the warning (an all too common thought) 00:38:39 Phantom_Hoover: yes 00:44:50 <\oren\> nah nah, it's simple: |A> is a column vector, a row vector is a dual vector 00:45:24 (assuming finite dimensions) 00:46:29 <\oren\> and is = |A> . |B> 00:46:53 and |A> if A is unital 00:48:58 <\oren\> I interpret it as |A> being A inside an arrow instead of the arrow on top 00:50:40 oerjan, that's a neat trick 00:51:32 |A> is useless syntactic sugar, more or less 00:51:35 -!- Reece has joined. 00:51:39 hold on what? what kind of projection? 00:59:35 https://en.wikipedia.org/wiki/Projection_(linear_algebra) 00:59:36 <\oren\> i guess 00:59:44 yes, exactly 01:00:22 <\oren\> Oh, so it's projection of any dimensional space onto a line 01:02:08 <\oren\> the projection I was thinking of was I-|A> Phantom_Hoover: it's a bit like leibniz integration notation - it's just syntactic sugar but the intuitive rules it implies just work 01:05:12 -!- XorSwap has joined. 01:10:08 oerjan, well not entirely, unlike leibniz notation you can easily make it rigorous 01:10:48 but when you do so you end up making |A> the exact same thing as A 01:11:19 01:19:06 -!- jaboja has quit (Ping timeout: 252 seconds). 01:38:42 From AnnieFlow on the wiki: Any object that is like a stack (queues, sets, etc.) can take the place of any stack in the program 01:41:03 presumably those are meant to be variants of the language 01:42:51 ais523: Yes, what I have a problem is with is "Any object that is like a stack (queues, sets, etc.)" 01:42:53 "sets" 01:43:01 -!- mihow has quit (Quit: mihow). 01:43:04 "sets [are like stacks]" 01:43:12 HOW ARE SETS LIKE STACKS 01:43:18 INTERROBANG 01:43:29 you can insert elements into them and remove elements from them 01:43:32 that's like a push and a pop 01:43:41 you can think of a set as being an unordered queue that removes duplicates 01:43:57 (note that the version with sets is sub-TC as it doesn't have infinite memory, due to the duplicate removal) 01:44:15 bags 01:46:54 the version with bags is /probably/ TC? I'm not sure though 01:47:02 it's even worse at flow control than fractran 01:48:04 ais523: But there isn't an operation nateomorphic to pop- no method that extracts an element from it and returns it then changes what the next element removed will be 01:48:28 hppavilion[1]: "remove an element at random" 01:48:32 ais523: Perhaps 01:48:58 oerjan: now I'm really interested as to whether BagFlow is TC 01:49:17 /especially/ because it manages to be a weird Minsky machine variant and I've made a lot of those recently 01:50:45 it's basically TAFM level 1, except that decrements sometimes fail at random 01:50:55 err, not level 1 01:50:56 level 2 01:51:08 TAFM level 2 except that decrements are sometimes critical at random 01:51:50 oh, and incrementing is free, you don't need to do stupid control shenanigans 01:51:55 that makes things easier 01:52:49 actually, let's consider the more general question: is a full-powered Minksy machine where decrements sometimes fail at random TC-probability-1? 
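A sketch of the machine ais523 is asking about: a counter machine whose decrement sometimes silently does nothing, just to make the "you can't guarantee the counter is actually zero" problem concrete. The instruction set and the failure model are one possible reading of "decrements sometimes fail at random", not anything standardised.

```python
import random

def run(program, counters, p_fail=0.1, max_steps=100_000):
    """Counter machine with unreliable decrements: DEC sometimes has no effect,
    so exact-count (modular arithmetic) tricks stop working."""
    pc = 0
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, *args = program[pc]
        if op == "INC":                      # increments always succeed
            counters[args[0]] += 1
            pc += 1
        elif op == "DECJZ":                  # decrement, or jump if zero
            reg, target = args
            if counters[reg] == 0:
                pc = target
            else:
                if random.random() >= p_fail:
                    counters[reg] -= 1       # with probability p_fail, nothing happens
                pc += 1
        elif op == "JMP":
            pc = args[0]
    return counters

# Try to move counter a into b; some of the decrements fizzle, so b drifts high.
prog = [("DECJZ", "a", 3), ("INC", "b"), ("JMP", 0)]
print(run(prog, {"a": 10, "b": 0}))
```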
01:53:27 you can't obviously use the normal modular arithmetic tricks with this because you can't guarantee that the counter is actually zero, unless there's some trick I haven't realised 01:55:05 -!- ^v has joined. 01:56:58 -!- mihow has joined. 01:57:37 How should my ELK runtime go about doing GUI? 01:57:51 In a serious ay 01:57:52 *way 01:59:20 hmm, what about the following BF derivative (which can be implemented in BagFlow)?: BF but all loops must be balanced, cells are unbounded both negative and positive, and a loop has a 1/(n+1) chance of terminating (where n is the value of the tested cell) 01:59:43 hppavilion[1]: normally a VM is not responsible for GUI itself 01:59:49 ais523: Ah 02:06:10 -!- hppavilion[1] has quit (Ping timeout: 256 seconds). 02:09:16 -!- Treio has quit (Quit: Leaving). 02:12:58 -!- hppavilion[1] has joined. 02:13:08 ais523: What is responsible then? 02:13:29 libraries, normally 02:13:34 ais523: OK... 02:13:48 ais523: And how does it work, precisely? For a VM like the CLR? 02:13:50 the VM will often have a "run native code" instruction to let the libraries inside the VM call functions in the libraries outside the VM 02:14:01 with both parts involved 02:14:10 ais523: Ah? 02:14:12 OK 02:14:31 -!- mihow has quit (Quit: mihow). 02:15:01 ais523: And if I wanted to make the VM do GUI, for the sake of ease and cross platformness and esoterocity? 02:15:10 (esoterotic?) 02:15:38 (Relevant: https://xkcd.com/915/) 02:15:41 hppavilion[1]: then you'd have a syscall instruction 02:15:47 ais523: OK... 02:15:50 that the code inside the VM could use to get the VM itself to do its GUI stuff 02:16:19 ais523: To be clear, this is a VM like the CLR for .NET or the JVM. It's a bytecode. 02:16:31 yes, I know what a bytecode VM is 02:16:35 you mentioned the CLR already 02:17:02 I know some things about some VMs but I don't know CLR/.NET/JVM much. I am familiar with Z-machine, and with "Famicom VM" (which originally was not a VM) 02:17:03 ais523: I know you know 02:17:30 ais523: I did? OK. 02:17:53 zzo38: is the Famicom VM the instruction set that Famicom emulators run? 02:18:00 Z-machine has one unusual feature where the stack is not part of RAM but general-purpose registers are. 02:18:20 ais523: Yes, although I am talking about an idealization 02:18:23 zzo38: that's not that unusual, the PIC microprocessor architecture works like that too 02:18:40 actually I think just about the only things that aren't memory-mapped are the stack and the program that's running 02:19:25 ais523: Pfft. You should totally memory-map the program. 
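What the "syscall instruction" arrangement ais523 describes might look like at its smallest: the VM interprets ordinary opcodes itself and only hands named requests to host-side functions, which is where GUI work would live. The opcode names and the host table are invented for the sketch, not taken from ELK, the CLR, or the JVM.

```python
def run_vm(code, host_syscalls):
    """Tiny stack-based bytecode loop with a SYSCALL escape hatch to the host."""
    stack, pc = [], 0
    while pc < len(code):
        op, arg = code[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "SYSCALL":
            # The VM knows nothing about GUIs; it just forwards the request.
            stack.append(host_syscalls[arg](stack.pop()))
        elif op == "HALT":
            break
        pc += 1
    return stack

host = {"show_message": lambda text: print("[host GUI]", text) or 0}
run_vm([("PUSH", "hello"), ("SYSCALL", "show_message"), ("HALT", None)], host)
```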
02:19:46 hppavilion[1]: in the PIC microprocessor architecture, the program actually has a different byte size from RAM 02:19:52 it's 14 bits to the byte 02:20:19 ais523: That is blasphemy 02:20:22 With Z-machine the program is memory-mapped, although most of it is inaccessible (only the first 64K is accessible for general-purpose access, the rest can store only packed strings and Z-code instructions and is read-only) 02:20:26 there is a system call on some of the more powerful models that lets you copy from the program to RAM, though 02:20:46 and some of them even go the other way, letting you copy from RAM to program, but that's very slow as it has to reprogram its internal EEPROM to do so 02:20:50 14-bits-to-the-byte is an abomination 02:20:52 (Packed strings and Z-code instructions can exist within the first 64K too though, and may even be writable) 02:21:13 I'm not quite sure what this feature is for, but Microchip seem to have a philosophy of introducing random features in case they're useful 02:21:34 and also documenting what happens in situations most people would expect to be UB, just in case that comes in useful to people some day too 02:23:24 " 02:23:24 If 14 bits lie with bytekind, as they lieth with a processor, both of them have committed an abomination: they shall surely be sent to /dev/null; their blood shall be upon them." -- Linusveticus 20:13 02:24:18 wow, you're taking this really personally :-( 02:25:12 I have implemented Z-machine in C and in JavaScript, and partially in 6502 assembly code, so far. (Although I now believe I have designed the API for the JavaScript Z-machine badly, since I now have better ideas about how to do it) 02:25:23 ais523: I do not like 14-bits-to-the-byte 02:25:39 16 would be acceptable, but still incur my scorn because 16 /= 8 02:25:54 well, the instruction set presumably didn't need any other number of bytes 02:26:50 ais523: If you don't want to use the full 16, scale it down to 8, or do something else with the design. 02:26:53 I have designed instruction sets where the number of bits in one byte is 16 or 32, and even 7 once, as well as ones with different program/data memory 02:27:00 Maybe some UTF-8 like bullshit, but on nybbles 02:27:08 *any other number of bis 02:27:15 hppavilion[1]: now you're just wasting a bunch of memory for no reason 02:27:28 But don't try to have a byte s.t. len(byte) /in {2**x : x in N} 02:29:05 I can't think of a technical reason for that 02:29:18 fwiw lots of different byte sizes were tried in the earlier history of computing 02:29:24 ais523: No, but there's a moral reason 02:29:28 settling on octets only happened in the last few decades 02:29:47 ais523: Uhm, the early history of computing was the 30s and 40s with Turing. 02:29:53 hth 02:30:04 unlike the number of bytes in your larger units, which does often have a reason to be a power of 2, there's no technical reason I can think of for the number of bits in a byte to be a power of 2 02:30:08 hppavilion[1]: I said "earlier" 02:30:41 ais523: It makes programmers more comfortable, and you don't have technical stuff without programmers. 02:30:43 There. 
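Purely for fun, one way hppavilion[1]'s off-hand "UTF-8-like, but on nybbles" idea could be made concrete: each nybble carries three data bits, with the top bit of every nybble except the last acting as a continuation flag. This is a hypothetical encoding sketched for illustration, not anything any real machine uses.

```python
def encode_nybbles(n):
    """Variable-length encode a non-negative integer, 3 data bits per nybble;
    bit 3 marks 'more nybbles follow'."""
    groups = []
    while True:
        groups.append(n & 0b111)
        n >>= 3
        if n == 0:
            break
    groups.reverse()
    return [g | 0b1000 for g in groups[:-1]] + [groups[-1]]

def decode_nybbles(nybbles):
    n = 0
    for nyb in nybbles:
        n = (n << 3) | (nyb & 0b111)
    return n

assert decode_nybbles(encode_nybbles(12345)) == 12345
print([bin(x) for x in encode_nybbles(12345)])
```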
02:30:49 come to think of it, it took a while for electronic computers to outcompete mechanical and (later) for digital computers to outcompete analog 02:31:17 digital computers were around early but from what I've managed to make out from old computer books, analog computers were more common for many years 02:31:24 * ais523 wonders if analog computers are used nowadays 02:31:52 I'm not all that old, and when I was young, people often used to explicitly say "digital computer" to disambiguate 02:31:55 nowadays nobody bothers 02:31:58 What is your opinion of JSZM? My own opinion is that the API could be improved and that it is a bit messy as is. Currently the "run" method is a generator function that yields stuff directly, and the methods defined by the front-end are ordinary functions. I think better would be, the "run" method never yields stuff directly but instead calls the front-end functions by "yield*" and they may then yield stuff. 02:32:20 ais523: "And the computre dost have a half score and four bits to every pyce of the meal" 02:32:24 -- An old computer book 02:32:52 zzo38: I don't have any opinions about specific z-machine implementations, having not looked into any of them in details 02:32:58 *in detail 02:33:44 -!- XorSwap has quit (Ping timeout: 276 seconds). 02:34:38 -!- XorSwap has joined. 02:34:41 -!- XorSwap has quit (Client Quit). 02:35:02 OK, but what about this API design? Do you know JavaScript programming? (This API design isn't really specific to the internals of Z-machine) 02:35:33 I don't know that much JavaScript programming 02:35:40 I can write programs in it but don't use all its features 02:35:46 -!- ^v has quit (Ping timeout: 245 seconds). 02:38:02 Have you used any ES6 features? JSZM is using many ES6 features. They still didn't add macros and "goto" in ES6 though. 02:40:40 -!- Phantom_Hoover has quit (Read error: Connection reset by peer). 02:41:22 zzo38: You're kidding about goto, right? 02:41:39 hppavilion[1]: No 02:41:45 zzo38: ... 02:42:00 (My Z-machine implementation in C is called ZORKMID ("Zork Machine Interpreter and Debugger"), and I have found it to be very useful when debugging other implementations!) 02:42:26 are there any z-machine impls in esolangs? 02:42:29 "If zzo38 lie with gotokind, as they lieth with a FOR loop, both of them have committed an abomination: they shall surely be sent to /dev/null; their blood shall be upon them." -- Linusveticus 20:13 02:42:30 also, is the z-machine TC? 02:43:19 ais523: The only unbounded memory it has is the stack, so I don't expect so. 02:43:33 probably a PDA then 02:44:07 You know what I'd LOVE to see? 02:44:26 (And the actual limit of the stack in implementations usually isn't extremely large anyways, although the specification doesn't seem to preclude an unbounded stack.) 02:44:42 A company manufacture cheap computers reminiscent of old computers (like the PDP) so that we can get the retro experience of how computers worked "back in the day" 02:44:48 Unfortunately, I now need to eat 02:44:49 Bai 02:45:28 (ZORKMID also reveals how unoptimized Infocom's story files are. I can think of a large number of ways to optimize their codes, which they did not do.) 02:48:35 zzo38: I assume they had no reason to optimize them because it would have taken developer time (therefore costing money) and the game ran fast enough anyway 02:49:59 -!- hppavilion[1] has quit (Ping timeout: 276 seconds). 02:50:36 My optimizations would likely to improve both speed and size. 02:51:31 how were the games distributed? 
02:51:56 Usually on floppy disks together with the interpreter, I think 02:52:39 so in terms of size, if the game fits onto the floppy disk, there's no cost savings in a smaller size unless you can save enough size to use a less capacious and thus cheaper design of floppy disk 02:53:11 They could have done that though, some computers floppy disk have less capacity than others 02:53:43 Also since they cannot fit the entire story file in RAM at once, the non-preloaded-area had to be swapped, by reloading parts from the disk when needed. 02:57:18 It is possible that they did not know an algorithm for encoding text with permanent shifts, so they only used temporary shifts; the algorithm is now known although it is slower than O(n) 02:58:41 One example of instruction coding is at address 29424 of Zork I they have the instruction "SET 31 -1" which encodes as five bytes (CD 4F 1F FF FF). It could be shortened to three bytes by encoding it as the BCOM instruction instead (probably also faster because the instruction decoding is simpler in such case). 03:03:43 The copyright notice could save fifteen bytes if permanent shifts were used 03:04:44 The other thing to do for optimization is to decide what strings to place into the "frequent words" table; however I do not know a suitable algorithm for doing this optimally. 03:13:18 Other possible optimizations include "frequent values optimization", "overlapping strings", "shared property tables", "truncated default properties table", "dynamic fwords", "BCOM immediate", "NEXT slot abuse", "gap filling", etc 03:27:24 -!- mauris has quit (Quit: Leaving). 03:34:37 ooh SineBot showed up in the wp page i'm following 03:36:24 Which is what page? 03:36:38 Talk:Planet Nine 03:37:35 i started following when the article was on the main page and thought it should have cooled down by now, but new issues keep coming up. 03:39:01 the latest being raised by one of the original researchers, who is very new to wikipedia, thus the missing signatures 03:39:42 i just remembered ais523 said he didn't think it was active, and i've seen so many missing signatures lately... 03:40:22 oerjan: I remember 0.999... 03:40:23 . o O ( how can you see them when they're missing ) 03:40:47 that article was a mess even before it made the main page, with so many people not believing it 03:40:49 and only got worse afterwards 03:40:53 heh 03:41:25 well i thought this one was getting pretty neat until the expert showed up to tell everyone they'd misunderstood stuff 03:41:47 err, mess wrt its talkpage 03:41:54 the nightmare is mostly kept off the article itself 03:41:57 ah 03:42:00 * ais523 wonders if it's ended up as PC1 yet 03:42:05 oerjan: Am I that pedantic? 
03:42:09 what's PC1 03:42:34 it's a newish protection level, it means that anyone can edit it but changes by anonymous users have to be reviewed before they go live 03:42:43 i,i it takes one to simulate one 03:43:00 there are quite a lot of reviewers, it's a relatively easy user rank to get 03:43:07 mostly it's intended to stop libel creeping into articles about people 03:43:24 shachaf: no, you're that cheeky hth 03:43:40 * oerjan doesn't think cheeky is quite the right word but cannot remember what is 03:43:48 hmm, looks like it was PC1 from feb 2014 to oct 2015 but the furore died down enough to be able to turn it off 03:45:01 the existence of PC1 is no doubt going to confuse people further about how Wikipedia works 03:45:08 many people assume all pages work like that 03:45:18 ha 03:46:00 well Planet Nine is currently semi-protected, anyway 03:46:57 no one paid attention to my suggestion it could be dropped when it went off the main page. but then that was about the time someone realized an academic spammer was editing it 03:47:08 hmm, the FAQ on Talk:0.999... is hilarious 03:48:17 that FAQ looks rather subtly hidden... 03:48:29 also, https://en.wikipedia.org/wiki/Talk:0.999.../Arguments demonstrates the answer to a longstanding philosophical problem at Wikipedia: where do you place metadiscussion about a talk page, given that it doesn't have a talk page of its own? 03:48:38 (the answer is apparently on the page itself) 03:50:02 ais523: I am quite amused by how much of a talk page that needs. 03:50:36 pikhq_: I was watching while that article was TFA 03:50:43 -!- hppavilion[1] has joined. 03:50:51 it's one of the most contentious TFAs ever, for no obvious reason 03:52:20 ais523: What is? 03:52:26 That would be easy to predict. 03:52:27 Something to do with being something that just about anyone with a minor amount of mathematical exposure can *think* they understand well enough to say something stupid, I think. 03:52:43 hppavilion[1]: 0.999... 03:52:58 hppavilion[1]: the page about what happens if you have a 0, a decimal point, and an infinite number of 9s 03:54:11 I don't like that FAQ because it doesn't get to the heart of the issue, which is definitions. 03:54:16 Currently, the ELK runtime- which is, I think, a RISC- has 0x2D instructions 03:54:24 Two people arguing about things without ever saying what they mean isn't very useful. 03:54:53 ais523: What does TFA stand for? And the page where? 03:55:01 Which is of course 1 because $$\sum_{x=1}^{\infty} \frac{9}{10^x} = 1$$. But, y'know. Math. 03:55:07 hppavilion[1]: Today's Featured Article, Wikipedia 03:55:25 hppavilion[1]: Today's Featured Article, and https://en.wikipedia.org/wiki/Talk:0.999... 03:55:31 I think if you asked the typical person who disagrees that 1 = 0.999... about the limit of 0.9, 0.99, 0.999, ..., they'll grant that that's 1. 03:55:46 They just don't like defining 0.999... as that limit. 03:55:55 the typical person who disagrees doesn't know what a limit is, I suspect 03:56:01 Which is fine, it's a matter of intuition or taste or something, not something you can really argue about. 03:56:01 shachaf: The typical person who disagrees that 1 = 0.999... doesn't grok limits. 03:56:20 Well, they'll agree that that sequence approaches 1, or whatever. 03:57:06 It's also a pretty basic result of what the notation means. It *is* $$\sum_{x=1}^{\infty} \frac{9}{10^x}$$. 03:57:15 Notation means whatever you want it to.
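Spelling out the partial-sum step behind the formula quoted above, using only the definition of the notation as a limit:

$$S_n = \sum_{x=1}^{n} \frac{9}{10^x} = 1 - 10^{-n}, \qquad 0.999\ldots := \lim_{n\to\infty} S_n = 1.$$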
03:57:37 Yeah well https://xkcd.com/169/ 03:58:11 You can say one meaning makes for a more elegant system than another, and that's a reasonable argument, but it's silly to argue that one notation is more right than another. 03:58:29 pikhq_: What? That's not the same thing at all. 03:58:59 My argument for why this is what the notation means is because *that's the consensus for what it fucking means*. 03:59:03 It depends on if you're a) Using the surreals and 2) defining 0.99999 as 1-ε, which is a stupid thing to do 03:59:28 OK, that's fine. 03:59:31 Because how do your write e.g. 1-2ε 03:59:46 You can say "+" is multiplication, but if you just randomly say "1+9 = 9" people are going to think you're talking nonsense. 04:00:00 Obviously, the answer is 0.999...8, but that's stupid because you can't generalize it to all surreal numbers 04:00:01 what's nextafter(-1.) in an implementation where floats are infinitely accurate? 04:00:11 Right, but if I say 1+9 = 9, and you say 1+9=10, the way to resolve that disagreement is to figure out what we mean. 04:00:34 It's not to say that I "don't grok addition". 04:00:46 Unfortunately, the people who say 0.999... != 1 don't know what they think 0.999... means. 04:01:18 pikhq_: Unless they know about the Surreal Numbers 04:01:26 All the notations that you're used to, and axioms that you're used to, have been invented and agreed on because some people found them useful or aesthetically pleasing. 04:01:46 Maybe someone doesn't like some consequence of the axiom of choice, so they decide not to use that axiom. 04:01:58 It makes their system nonstandard, but it doesn't make them wrong. 04:02:27 shachaf: DAMN YOU, BANACH-TARSKI 04:02:28 the axiom of choice really brings home to me just how much we don't know about infinity 04:02:44 Axiom of choice is use in system that uses that axiom, although in general I do not really like axiom of choice 04:02:45 See, you're talking about things that could make sense for someone using nonstandard mathematics. The issue is, *0.999... != 1 is almost always a statement out of mathematical ignorance, not a consequence of different axiom choice*. 04:02:50 I would like to see something about a world where mathematics applies to the real world 04:02:57 like, with most axioms, you intuitively know they're true, just can't prove them 04:03:01 And not just the school mathematics; the weird stuff too 04:03:19 So, for example, companies started using banach-tarski to mass produce objects 04:03:33 the parallel postulate is one where that isn't the case, but it's also possible to understand a universe where it isn't true 04:03:50 and in fact the fact that it does seem to apply to our universe was only relatively recently established and was far from certan 04:03:52 *certain 04:04:04 Oh no 04:04:09 meanwhile, the axiom of choice, both assuming it's false and assuming it's true lead to absurdities 04:04:25 (not contradictions, just situations that intuitively make no sense) 04:04:33 sadly I last saw this years ago and no longer can remember the examples 04:05:39 pikhq_: I think a typical disagreement with that equality is "0.999... is very close to 1, but not equal to 1". That suggests that people don't believe in the infinite sum but only in a finite prefix of it, which is probably reasonable in some sort of finitism that you could work out, even if they can't articulate it. 04:06:38 It's easy to show that 0.9... = 1 04:06:43 Anyway I don't want to be in the position of defending 0.999... /= 1, because that's silly. 
I'm just suggesting to be more charitable by default. 04:06:52 Open python and type in (1/9)*9 04:06:58 And it is equal to 1 04:07:00 I mean, duh 04:07:02 xD 04:07:17 ais523: ZF!C means you have a vector space without a basis, apparently. 04:07:23 there's got to be language+runtime combinations where that doesn't work 04:07:49 pikhq_: that's within my personal tolerance of weirdness, assuming that infinities are involved 04:07:51 `` echo 'print (1/9)*9' | python 04:07:52 0 04:08:00 although I'm in #esoteric, my tolerance of weirdness is pretty high 04:08:50 ais523: Intuitionistically the axiom of choice doesn't even need to be an axiom, it's just true. 04:08:52 But does 0.000... = 0? 04:09:08 hppavilion[1]: assuming that's not a joke, yes 04:09:20 But the law of excluded middle is not true. 04:09:29 shachaf: However, pierce's law is 04:09:32 Problem, formal logic? 04:09:34 most people who think that 1-0.999... is nonzero think that it's equal to 0.000...1 04:09:38 whatever that means 04:09:52 ais523: ε, probably 04:09:54 I've never heard that. 04:10:06 shachaf: hmm, how does including the middle let you prove the axiom of choice? 04:10:25 It doesn't. 04:10:56 I thought as much 04:11:03 It's just that "exists" and "or" mean something stronger in that logic. 04:11:03 presumably some other axiom is added to compensate? 04:11:09 ah right 04:11:31 fwiw, the main result of intuitionistic logic that I know of is f(¬¬x)=¬¬f(x) 04:11:44 also I have a physical ¬ key on my keyboard but use it so rarely I had to think for a while to figure out where it was 04:11:57 I'm actually not quite sure why it works. No axiom is added to compensate. 04:12:00 it's only on the UK keyboard layout so that we can use a UK keyboard to type both ASCII and EBCDIC (¬ is in EBCDIC) 04:12:35 The UK layout has AltGr, right? 04:12:36 ais523: WTH is ¬ EBCDIC!? 04:12:41 *in 04:12:48 I type ¬ with AltGr-\ 04:12:55 shachaf: yes, we have an altgr 04:13:00 it's not used for much by default 04:13:07 only the second | and € 04:13:09 I've been considering engineering a Python program that lets me type weird characters 04:13:19 Hmm, does EBCDIC also have ¦? 04:13:24 and I have no idea why we have two | keys (they produce different characters on many OSes but not on Linux so I can't demonstrate) 04:13:41 ¦ and |? 04:13:50 Broken and solid vertical bar. 04:13:54 hppavilion[1]: EBCDIC just makes different choices as to which characters are important than ASCII does 04:14:07 shachaf: that's a common set of characters to use for the keys, but not the only one I've seen 04:14:14 ais523: It doesn't seem like EBCDIC would even have room for other characters 04:14:22 It's 6 bit IIRC, and 2**6 = 64 04:14:27 hppavilion[1]: note that the backslash was originally invented so that you could type \/ and /\, so ¬ works fine 04:14:29 also EBCDIC is 8 bit 04:14:34 with many of them unused 04:14:39 Ah 04:14:43 ais523: It is? 04:14:44 I thought you liked power-of-2-bit bytes? :-P 04:14:45 Weird 04:14:58 ais523: Yes, which is why I didn't like EBCDIC 04:15:09 Am I thinking of another encoding that does 6 bits? 04:15:56 there's Baudot but it's five bits (with shift codes, thus it has 64 characters) 04:16:42 Unfortunately, ¬ is *not* one of the characters in EBCDIC with an invariant location. 04:16:56 (because of *course* EBCDIC has code pages) 04:17:16 indeed 04:17:27 ... And ¬ is encoded in different locations in different ones. 
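The code-page point is easy to poke at from Python, which bundles several EBCDIC codecs (cp037 and cp500 are two of them). If I recall correctly they place ¬ at different byte values, but rather than assert that, the snippet just prints whatever the codecs say:

```python
# cp037 and cp500 are EBCDIC codecs shipped with CPython; print where ¬ lands.
for codepage in ("cp037", "cp500"):
    encoded = "¬".encode(codepage)
    print(codepage, hex(encoded[0]))
```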
04:17:33 Wikipedia's example EBCDIC has a ±, it seems, and a soft hyphen 04:17:42 but I'm not sure all of them did 04:17:59 s/did/do/ 04:18:09 few people use EBCDIC nowadays, I hope at least 04:18:15 Monsterous though it might be, it's still around. 04:18:31 we can use it because we use technologies that lost the standards wars for fun sometimes 04:19:09 Many banks still have significant use of mainframes in day-to-day operations. 04:20:50 It's pretty much entirely incompatible with sane notions of operation, but that doesn't stop anyone. 04:21:43 And (of course) UTF-EBCDIC sees basically zero use. Just non-Unicode legacy charsets. 04:23:08 Guess what I had "fun" doing at my last job? 04:24:12 were you using Perl? It actually has an official EBCDIC version, for some reason 04:24:15 don't know how maintained it is 04:24:52 Nope. We were also not using EBCDIC ourselves, we were talking to a system that *did*. 04:26:04 that seems to be less bad than most other combinations 04:26:16 figure out what codepage it's using then just re-encode at the communications boundary 04:26:25 (potential issue: if it's inconsistent codepage-wise) 04:26:41 (other potential issue: if you're mixing text and binary and don't know which is which) 04:26:48 It was also not just EBCDIC text, but COBOL-defined data structures that *included* EBCDIC text. 04:28:39 ah right, that's harder 04:29:51 Long story short, I've written a COBOL parser. 04:30:59 O, finally you did 04:31:09 pikhq_: for the data structure or the language itself? 04:31:15 actually COBOL and SQL remind me a lot of each other 04:31:59 ais523: For the language's description of data structures. 04:33:56 Which then fed into an arbitrary-data-structure walker. 04:37:22 Do you ever forget you're browsing Wikipedia instead of esolangs.org and click "Random Page" expecting to see something even remotely interesting? 04:41:36 * oerjan tries and hits https://en.wikipedia.org/wiki/Siege_of_Thebes_(292%E2%80%93291_BC) 04:42:01 better than average, me thinks 04:42:30 Yep. 04:42:40 second try: https://en.wikipedia.org/wiki/Toronto_Telegram 04:43:57 https://en.wikipedia.org/wiki/The_Mello-Kings 04:44:24 this is unusually good, have they changed random article since last i tried 04:44:51 * Elronnd wonders why people in #esoteric, of all channels, have messed up the meaning of "random" 04:45:22 Elronnd: wat, it's what the wp link says 04:46:00 * Elronnd goes to wikipedia.org 04:46:23 what does the wp link say 04:46:39 https://en.wikipedia.org/wiki/Gustav_Andreas_Tammann 04:47:02 "Random article" 04:47:02 Elronnd: "Random article" 04:47:03 yes, I know 04:47:21 said wp link doesn't seem to say "improved" or "changed" or anything like that 04:47:40 I forget, is a space %20 or %2F 04:47:48 %20 04:47:59 https://en.wikipedia.org/wiki/VF-194_(1955-8) 04:49:09 https://en.wikipedia.org/wiki/Mythopoeic_thought 04:49:17 now I'm trying to remember what 2F is 04:49:21 !unicode U+002F 04:49:29 `unicode U+002F 04:49:30 ​/ 04:49:34 ah right 04:49:34 WHY CAN'T I GET A REALLY SHITTY ARTICLE 04:49:50 Select random Wikipedia article and then try to make a computer game about that subject 04:49:51 oerjan: random article patrol is actually a thing 04:50:00 if you want bad articles, you probably want to look in special:Newpages 04:50:09 "patrol"? 
04:50:23 oerjan: basically, a systematic way to improve the encyclopedia 04:50:31 in random article patrol you generate random articles then try to improve the 04:50:43 note that some articles have higher probability in random article than others; those are more likely to be improved 04:50:54 ais523: i recall just a few years ago, and i tended to hit stubs or boring lists everywhere 04:51:19 (the way it works is that each article is associated with a random real number between 0 and 1, and random article generates another random number in that range and then looks for the next-highest number on an article) 04:51:25 so, they've improved the randomness, check 04:51:40 ais523: *cough* surely not a real. 04:51:59 pikhq_: well it's not stored infinitely accurately 04:52:03 Well, I mean, it would be a random number that would fit in the reals, but surely they're not generating reals. :) 04:52:03 so it's more of a float 04:52:15 although it's possible it's fixedpoint instead 04:52:22 it's a type that's meant to act like a real, at least 04:52:40 * ais523 wonders about the concept of random computable reals 04:52:53 I think you could do it via generating digits lazily 04:53:15 Guaranteeing uniformity would be trickier though. 04:53:38 Or would it? 04:53:41 Hmm. 04:55:51 ais523: using linux? chances are high that you can press ctrl+shift+u, then type hex to enter a character by code; e.g. ctrl+shift+U, 2, f, space 04:56:06 deltab: for me that works in some programs but not others 04:56:13 my IRC client is one where it doesn't 04:56:32 yeah, depends on the toolkit used 04:56:36 the really weird thing is that sometimes it does show the underlined u, but then cancels out of it as soon as I press a digit 04:57:04 finally a list https://en.wikipedia.org/wiki/McAdam_(surname) 04:57:10 where I don't know what the trigger behind the "sometimes" is, but I can often change whether it works or not by pressing alt-tab a few times (ending back up at the same program) 04:57:16 My IRC client works with ctrl+shift+U 04:58:03 ais523: huh, I've not seen that 04:58:27 the compose key is similar, it will or won't work for no obvious reason but pressing alt-tab a few times fixes it 04:58:33 well, often fixes it 05:00:35 at least it actually does work, when it's working 05:00:40 rather than showing an underlined u that doesnt do anything 05:11:15 IRC client I am using cannot send non-ASCII character at all, although you can receive messages containing non-ASCII characters 05:11:42 Elronnd: I love crl+shift+U :) 05:11:56 Also the keyboard is read by xterm 05:12:32 about the two vertical bars: it seems that in the early days of ASCII (1967), some people wanted ! to instead display as | in mathematical contexts, while others wanted a separate character code for |, and the compromise was that a broken bar would be added so that it wouldn't be confused with the !-vertical-bar 05:13:17 hence the broken bar symbol on keyboards 05:14:43 later (1977) the separate vertical bar was made solid, but the broken form remained in keyboard standards 05:15:50 and somehow later got itself encoded as its own character in ISO 8859 05:17:50 http://www.siao2.com/2006/02/24/538496.aspx#comment-50354 05:18:24 Why isn't "!yield*" allowed in JavaScript? At least Node.js seem to disallow it 05:19:39 http://peetm.com/blog/?p=55 05:29:06 -!- EgoBot has quit (Ping timeout: 240 seconds). 05:29:27 -!- EgoBot has joined. 05:33:02 zzo38: if I'm reading the spec right, it's because ! 
wants a UnaryExpression, and a YieldExpression isn't one 05:34:47 deltab: OK, although I am not sure why it has to be that way. I got it to work by put parentheses but I think it ought to work even without it? 05:38:43 -!- bb010g has quit (Quit: Connection closed for inactivity). 06:04:47 zzo38: How could `yield` /possibly/ be an acceptable argument to `!`? How? 06:32:52 Someone should make an esolang with zeroth-class data 06:42:53 Which means what? 06:45:44 you don't use the data, the data uses you hth 07:16:24 -!- tromp_ has quit (Remote host closed the connection). 07:37:08 -!- gniourf has quit (Ping timeout: 272 seconds). 07:39:17 -!- oerjan has quit (Quit: Gravity?). 07:47:23 @tell oerjan you don't use the data, the data uses you hth <- Pretty sure that's been suggested on the "Ideas" page under Soviet Russia htmh 07:47:23 Consider it noted. 08:17:07 -!- tromp_ has joined. 08:20:34 [wiki] [[Befunge]] https://esolangs.org/w/index.php?diff=46362&oldid=46156 * 64.222.227.34 * (+215) /* Befunge-98 and beyond */ 08:21:20 -!- tromp_ has quit (Ping timeout: 245 seconds). 08:24:36 -!- hppavilion[1] has quit (Ping timeout: 248 seconds). 08:39:15 -!- bender| has joined. 09:33:20 -!- AnotherTest has joined. 09:37:58 -!- Reece has quit (Read error: Connection reset by peer). 09:41:54 -!- gniourf has joined. 10:10:40 -!- ais523 has quit (Ping timeout: 256 seconds). 10:17:26 -!- asie has quit (Ping timeout: 240 seconds). 10:28:55 -!- bender| has quit (Ping timeout: 260 seconds). 10:47:16 -!- heroux has joined. 11:02:43 -!- J_Arcane has quit (Ping timeout: 276 seconds). 11:10:02 -!- sebbu has quit (Ping timeout: 250 seconds). 11:17:52 -!- tromp_ has joined. 11:20:56 -!- heroux has quit (Ping timeout: 240 seconds). 11:22:08 -!- tromp_ has quit (Ping timeout: 250 seconds). 11:28:24 -!- Phantom_Hoover has joined. 11:28:35 -!- heroux has joined. 11:31:11 my bf interpreter has been running a program that prints 99 bottles of beer 11:31:16 for 9 hours 11:31:50 all the output is printed at the end so i wasn't even sure if it was still working or what 11:32:00 fired up gdb, attached that process 11:32:14 56 Bottles of beer on the wall <- it's here 11:32:20 after 9 hours 11:34:10 was this interpreter written in malbolge? 11:34:34 it's written in sed 11:34:45 oh god, worse! 11:39:33 wish there was a way to run grep on a certain offset in /proc/pid/mem 11:39:50 -!- sebbu has joined. 11:41:07 something something dd|grep 11:42:55 i can get the start offset of the heap 11:42:59 that won't change 11:43:04 not sure where to stop though 11:49:08 Read from /proc/pid/maps first? 11:49:59 `` grep '\[heap\]' /proc/self/maps 11:50:00 0062b000-0064d000 rwxp 00000000 00:00 0 [heap] 11:50:08 yes but that changes too fast 11:50:34 Well, depending on your process. 11:50:52 in this particular process it changes too fast 11:51:05 (fastly?) 11:51:09 (quickly?) 11:51:13 You can send a SIGSTOP to it, do your stuffs, and send a SIGCONT. 11:51:16 it changes too often 11:51:19 right 11:51:42 i was thinking about ptracing it but stopping it seems easier 11:51:55 thanks 11:52:02 -!- heroux has quit (Ping timeout: 245 seconds). 11:52:46 If you want easy (instead of DIY), you could always attach gdb and use its "find" command to search for things. 11:52:49 Though I don't think it does regexps. 11:53:21 this is a program that's not even compiled with debugging symbols :\ 11:53:31 It doesn't have to be, for that. 11:53:34 well ok 12:01:01 -!- heroux has joined. 12:06:48 -!- madyach has joined. 12:41:09 -!- zadock has joined. 
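Putting together the pieces suggested above (read /proc/pid/maps for the heap range, SIGSTOP the process, scan /proc/pid/mem, SIGCONT it) as a rough Python sketch. It needs the usual ptrace permissions, and the PID and target string are placeholders for whatever the bf-in-sed process happens to be printing.

```python
import os
import re
import signal
import sys

def scan_heap(pid, needle):
    """Search the target's heap for `needle`, pausing the process while reading."""
    with open(f"/proc/{pid}/maps") as maps:
        line = next(l for l in maps if "[heap]" in l)
    start, end = (int(x, 16)
                  for x in re.match(r"([0-9a-f]+)-([0-9a-f]+)", line).groups())

    os.kill(pid, signal.SIGSTOP)           # freeze it so the heap stops moving
    try:
        with open(f"/proc/{pid}/mem", "rb") as mem:
            mem.seek(start)
            data = mem.read(end - start)
    finally:
        os.kill(pid, signal.SIGCONT)       # let it carry on

    offset = data.find(needle)
    return None if offset == -1 else start + offset

if __name__ == "__main__":
    pid = int(sys.argv[1])                 # PID of the interpreter process
    print(scan_heap(pid, b"Bottles of beer on the wall"))
```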
12:49:14 -!- tromp_ has joined. 12:52:24 -!- zadock has quit (Quit: Leaving). 12:54:29 -!- tromp_ has quit (Ping timeout: 276 seconds). 12:57:14 -!- madyach has quit (Ping timeout: 252 seconds). 13:03:31 -!- anybody_ has joined. 13:04:19 -!- anybody_ has quit (Client Quit). 13:05:15 -!- PinealGlandOptic has joined. 13:14:54 -!- Frooxius has quit (Read error: Connection reset by peer). 13:15:07 -!- J_Arcane has joined. 13:39:58 -!- heroux has quit (Ping timeout: 250 seconds). 13:55:22 -!- heroux has joined. 14:29:01 -!- tromp_ has joined. 14:30:17 -!- Frooxius has joined. 14:32:58 -!- heroux has quit (Ping timeout: 240 seconds). 14:35:44 -!- heroux has joined. 14:43:11 -!- LexiciScriptor has joined. 15:55:16 -!- heroux has quit (Ping timeout: 248 seconds). 15:57:39 -!- heroux has joined. 16:05:24 -!- heroux has quit (Ping timeout: 256 seconds). 16:09:43 -!- atslash has quit (Remote host closed the connection). 16:09:49 -!- heroux has joined. 16:12:53 -!- tromp_ has quit (Remote host closed the connection). 16:22:08 -!- Treio has joined. 16:26:13 -!- tromp_ has joined. 16:29:35 -!- sebbu has quit (Ping timeout: 240 seconds). 16:31:33 -!- boily has joined. 16:31:36 @metar CYQB 16:31:37 CYQB 061600Z 23009KT 30SM FEW045 BKN130 M06/M12 A3016 RMK SC2AC5 SLP222 16:34:22 -!- p34k has joined. 16:44:31 -!- Reece` has joined. 17:02:42 -!- boily has quit (Quit: LAMINAR CHICKEN). 17:02:53 <\oren\> @metar CYYZ 17:02:53 CYYZ 061600Z 28012G17KT 15SM SCT025 BKN035 01/M05 A3020 RMK CU3SC4 SLP235 17:10:47 @metar EGLL 17:10:48 EGLL 061650Z AUTO 20025G40KT 9999 -RA BKN029 BKN045 12/06 Q0991 TEMPO RA 17:10:54 A bit windy today. 17:29:18 -!- tromp_ has quit (Remote host closed the connection). 17:34:14 -!- variable has joined. 17:34:48 -!- variable has changed nick to trout. 17:35:14 -!- trout has changed nick to function. 17:35:20 -!- function has changed nick to constant. 17:36:46 -!- constant has quit (Remote host closed the connection). 17:37:10 -!- variable has joined. 17:39:27 -!- variable has quit (Remote host closed the connection). 17:43:26 -!- hydraz has quit (Quit: Bai.). 17:43:35 -!- hydraz has joined. 17:43:35 -!- hydraz has quit (Changing host). 17:43:35 -!- hydraz has joined. 17:43:59 -!- heroux has quit (Ping timeout: 264 seconds). 17:45:11 -!- Reece` has quit (Ping timeout: 264 seconds). 17:45:29 -!- lynn has joined. 17:45:38 -!- tromp_ has joined. 17:49:47 -!- Reece` has joined. 17:59:32 [wiki] [[Talk:Brainfuck]] https://esolangs.org/w/index.php?diff=46364&oldid=46091 * YoYoYonnY * (+0) /* Would BF still be TC with do-while loops? */ 18:00:17 -!- sebbu has joined. 18:06:06 -!- heroux has joined. 18:38:49 -!- boily has joined. 19:29:10 -!- FiredBall-0x71 has joined. 19:29:17 http://www.pearltrees.com/pvpeliter/laptop-disini-bought-governor/id15409744#item167481741, , xWindow 10 ENTERPRISE , FREE CLASSIFIED OS FROM THE MOST HIGH HAS BEEN RELEASED , CLICK ON THE LINK THAT POP UP AND CLICK DOWNLOAD ... . DON'T FORGET TO JOIN ##Astara ... . 19:29:56 Can someone kick that guy 19:31:16 real fast nora 19:31:59 <\oren\> let's just spam them bak 19:32:47 nice idea 19:32:48 should I write a bot to PM them constantly? 19:33:58 * FiredBall-0x71 come join ##astara prince 19:34:16 FiredBall-0x71: how about fuck you 19:34:32 -!- FiredBall-0x71 has left. 19:36:20 -!- b_jonas has quit (Quit: Changing server). 19:37:15 <\oren\> i wonder how much spammers make per hour 19:42:22 <\oren\> http://www.orenwatson.be/fontdemo.htm Best font cool terminal monospace hacker haxxor typeface neoletters matrix neo letters. 
I couldn't believe how cool this font look on my terminal with irssi nano bash c c++ perl python brainfuck malbolge intercal befunge. it the best font ever 19:42:55 <\oren\> is that a good impersonation? 19:43:26 -!- b_jonas has joined. 19:44:15 0x9 out of 0xA 1337h4xx0rz prefer it! 19:46:00 do you have both ß in there? 19:46:04 <\oren\> yup 19:46:21 great 19:47:01 -!- heroux has quit (Ping timeout: 245 seconds). 19:47:12 <\oren\> it the best font ever with support english deutsch espanol italiano greek cyrillic katakana hiragana etc math arrows even runic. best font for programming irc dwarf fortress nethack and more. 19:47:27 nice font 19:47:32 is should install it, but i'm too lazy 19:47:40 same 19:50:53 Huh what spam where. 19:50:58 Oh, too late to do anything. 19:51:06 he was spamming in #vim too 19:51:45 ##c and #perl as well. 19:51:45 what do you do in a vim channel? 19:51:56 -!- heroux has joined. 19:51:57 Edit files? 19:52:03 myname: talk aboutu vim? 19:52:10 weird 19:52:12 myname: what do you do in a #debian channel 19:52:32 not being there 19:52:45 i am almost exclusively in offtopic channels 19:53:12 `? spam 19:53:24 Spam is a delicious meat product. See http://www.spamjamhawaii.com/ 19:53:33 <\oren\> I would use vim if it had hints at the bottom like nano does 19:53:52 \oren\: you use nano 19:53:58 <\oren\> yeah 19:53:58 hints for what? 19:54:22 myname: run nano and you'll see what \oren\ means 19:54:30 i know nano 19:54:40 but what would you hint in vim? 19:54:43 :wq? 19:54:47 <\oren\> yeah 19:54:47 sounds silly 19:55:04 if you know :, you know wq 19:55:34 <\oren\> well yah I know ed, but most people don't 19:55:44 the reason nano has hints is because it needs those 19:56:05 <\oren\> and I can never remember the commands that aren't : commands 19:56:21 you should play more nethack 19:56:54 <\oren\> so when I'm dropped into vim by e.g. svn, I have to just go into : and use it like ed. 19:57:45 playing nethack helped me a lot getting my head around this abbreviation stuff that's going on 19:57:48 -!- heroux has quit (Ping timeout: 250 seconds). 19:59:02 -!- AlexR42 has joined. 20:01:45 -!- madyach has joined. 20:02:14 -!- heroux has joined. 20:07:46 -!- madyach has quit (Ping timeout: 250 seconds). 20:12:30 why do people keep making thread libraries where if an uncaught exception is raised in a thread, it only terminates that thread rather than aborting the whole process? 20:13:13 it's just dangerous. leads to errors getting unnoticed, while the user wonders why the program doesn't react. 20:19:36 just use more erlang 20:43:16 So they terminate the thread and then don't notify you that they did? 20:45:53 tswett: they notify you when you join the thread 20:46:00 but you can only wait for one thread to join 20:46:12 so you would need an extra thread for each thread if you wanted to catch it immediately 20:46:28 and even then it would be a waste, because the FUCKING EXCEPTION CODE CAN JUST CALL abort() INSTEAD! 20:49:45 And sadly, this isn't really only the responsibility of the thread library. It's more handled by the exception library. 20:50:11 Now, with these libraries, suppose you've got two different threads, each of which is going to produce some value. You want to wait until one thread or the other produces the value and get the value from whichever thread it happened to be. 20:50:14 Is there a way to do that? 20:50:16 So it's a whole language design issue that you can't just change easily. 
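For comparison, plain Python threading behaves the way b_jonas is complaining about (arguably worse: join() doesn't even re-raise the exception), and the "just call abort()" fix can be bolted on through threading.excepthook, which exists in Python 3.8 and later. A small sketch of both the default behaviour and that workaround:

```python
import os
import threading

def worker():
    raise RuntimeError("nobody will stop the process for this")

t = threading.Thread(target=worker)
t.start()
t.join()   # a traceback is printed to stderr, but join() just returns
print("main thread carries on as if nothing happened")

# Workaround in the spirit of the complaint: any uncaught exception in any
# thread takes the whole process down.
def abort_on_thread_exception(args):
    print("uncaught exception in a thread:", args.exc_value)
    os.abort()

threading.excepthook = abort_on_thread_exception
```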
20:50:37 tswett: sure, you use some higher level structures, like futures or condition variables for that 20:50:53 tswett: the raw thread thing itself doesn't want to do that, because it's lower level 20:51:09 but there's lots of high level abstractions you can use, or write one with low level condition variables if you want 20:51:17 but this is for the case of unexpected errors 20:51:47 -!- heroux has quit (Ping timeout: 264 seconds). 20:52:15 Reasonable languages like the C++ standard library (threads and exceptions) don't do this. 20:52:29 What library are you using, exactly? 20:53:05 tswett: Ruby does this by default, and currently I'm trying to read up a bit about rust, and apparently its exceptions (panics) do this too. 20:53:08 It's crazy. 20:53:20 It just makes no sense. 20:53:36 Terminating the thread instead of just calling abort() actually requires extra work for the implementation. 20:53:39 It's just stupid. 20:53:54 (perl Coro does this by default as well) 20:54:39 There's always workarounds of course, eg. you can put a try-catch at the top level function of each thread, to catch the exception, and call abort from it, but those don't work if you're not the one starting the thread. 20:56:32 -!- tromp_ has quit (Remote host closed the connection). 21:05:16 -!- heroux has joined. 21:09:22 -!- tromp_ has joined. 21:13:28 [wiki] [[HALT]] https://esolangs.org/w/index.php?diff=46365&oldid=46355 * 85.179.165.201 * (+2) 21:13:40 -!- heroux has quit (Ping timeout: 248 seconds). 21:18:06 So I'm playing with this Gray-Scott thing: https://pmneila.github.io/jsexp/grayscott/ 21:18:12 A reaction-diffusion system. 21:18:50 I'm exploring the feed rate range with the death rate set to 0.061. 21:19:59 Specifically, with feed rates less than a certain amount... 21:21:42 Feed rates of about 0.03 and below. 21:22:24 There's a rather neat behavior here. So, with these feed rates, the landscape fills with solitons. 21:22:40 There's a certain stable density range. Interesting stuff happens outside this range. 21:23:08 If the density is too low, then solitons reproduce, increasing the density. 21:23:14 -!- heroux has joined. 21:23:44 More interesting: if the density is too high, then nearby solitons start to oscillate in tandem. These oscillations increase in magnitude until a bunch of the solitons suddenly die. 21:25:06 Surviving solitons then move into the resulting empty space, perhaps even reproducing. 21:26:55 Decreasing the feed rate lowers the stable density. So you can cause mass die-offs that way if you want. 21:31:33 -!- hppavilion[1] has joined. 21:32:05 With a feed rate of 0.023, it takes a long time for the solitons to reach this stable density. 21:36:19 With a feed rate of 0.022, it looks like there is no stable density. Whenever there's a die-off, the reproduction caused by this die-off causes another die-off. 21:39:29 -!- tromp_ has quit (Remote host closed the connection). 21:39:44 And with a feed rate of 0.02, it looks like a population cannot survive. That feed rate is so low that even a lone soliton oscillates and dies. 21:40:59 Lemme try exploring in the other direction now. 21:45:04 As the feed rate increases, solitons begin to reproduce more eagerly. 21:45:16 Now, the way soliton reproduction works is that a soliton elongates and then breaks in two. 21:45:55 Once the feed rate increases to 0.031, the elongated soliton doesn't necessarily break in two any more; it just stays that way. A worm. 21:46:29 (All these are in the presets.) 
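For anyone following along without the demo page, these are the Gray-Scott equations being swept through here: dU/dt = Du*lap(U) - U*V^2 + F*(1 - U) and dV/dt = Dv*lap(V) + U*V^2 - (F + k)*V, where F is the feed rate and k the death (kill) rate. A rough numpy step follows; the diffusion coefficients, time step and seeding are illustrative guesses, not necessarily what the linked demo uses.

```python
import numpy as np

def gray_scott_step(U, V, F, k, Du=0.2097, Dv=0.105, dt=1.0):
    """One explicit Euler step of Gray-Scott on a periodic grid."""
    def lap(A):  # 5-point Laplacian with wrap-around edges
        return (np.roll(A, 1, 0) + np.roll(A, -1, 0) +
                np.roll(A, 1, 1) + np.roll(A, -1, 1) - 4 * A)
    UVV = U * V * V
    U_next = U + dt * (Du * lap(U) - UVV + F * (1 - U))
    V_next = V + dt * (Dv * lap(V) + UVV - (F + k) * V)
    return U_next, V_next

# The soliton regime from the log: feed rate 0.023, death rate 0.061.
U = np.ones((128, 128))
V = np.zeros((128, 128))
U[60:68, 60:68], V[60:68, 60:68] = 0.5, 0.25   # seed one blob
for _ in range(2000):
    U, V = gray_scott_step(U, V, F=0.023, k=0.061)
```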
21:48:10 When the feed rate gets to about 0.036, worms begin merging with the solitons at their tips. 21:50:40 As a result, worms dominate the world. 21:50:44 Solitons usually don't survive too long. 21:51:37 (Because they get "eaten" by worms.) 21:51:58 At 0.039, worms can start to merge with each other and form three-way junctions. 21:54:11 At feed rate 0.05, this happens aggressively; worm tips almost totally vanish as they plunge into other worms and make these junctions. 21:54:18 (Hot.) 21:57:29 ELK ASM now has 0x7C instructions :) 21:57:33 (124, for n00bs) 21:58:04 It got inflated because I need instructions for EVERY type- e.g. I have ADD and ADD.FLOAT and ADD.DOUBLE and ADD.UN 21:59:40 I'm trying to decide whether to add JMP..FLOAT, JMP..DOUBLE, and JMP..UN, or to only have JMP, JMP.Z, and JMP.NZ, and combine those with existing condition getters 22:00:14 I'm leaning towards the latter, but I already have CSET (conditional set) for all the operations, so... 22:01:25 tswett: Feed rate 1 22:03:16 I think I'll do the latter, but keep the CSET instructions (since combining them with a condition clobbers the destination no matter what, but I want it to not change the destination if the condition fails) 22:10:00 -!- AlexR42 has quit (Quit: My Mac has gone to sleep. ZZZzzz…). 22:14:55 There doesn't seem to be much in the way of qualitative change increasing through feed rate 0.068... 22:15:53 Feed rate 0.069, rings formed by the worms start to contract and disappear. 22:20:12 Around 0.074, an important change happens: curves in worms start to contract, instead of looping out the way that rivers and streams do. 22:20:36 -!- oerjan has joined. 22:21:07 @messages- 22:21:08 hppavilion[1] said 14h 33m 44s ago: you don't use the data, the data uses you hth <- Pretty sure that's been suggested on the "Ideas" page under Soviet Russia htmh 22:21:10 MAYBE 22:21:52 oerjan: It has been 22:22:04 I WASN'T DENYING THAT 22:22:50 HELLØRJAN. 22:23:17 BOD KVELDY 22:23:42 I'm making a bytecode VM called ELK designed as a vastly inferior alternative to .NET 22:24:01 Something we can write too many compilers for and basically have a BF that interacts with a Thue and stuff 22:24:30 You could look how I designed QUACKVM for another way that VM instruction set can be defined 22:24:33 delicious science rumors: http://www.sciencemag.org/news/2016/02/woohoo-email-stokes-rumor-gravitational-waves-have-been-spotted 22:24:57 It has 0x7F instructions so far 22:25:00 At about 0.083, little rings collapse quickly, and four-way junctions tend to split. 22:25:31 Legendary ‘mammoth steak’ turns out to be sea turtle 22:26:05 Going from 0.083 to 0.084, worms tips now paradoxically retract instead of elongating. 22:27:23 The overall feeling is that the worms are now similar to lines with tension, trying to become as short as possible. 22:27:28 I should logread to understand what the fungot is going on here, but I like my mammoth steaks to remain mysterious. 22:27:28 boily: but everyone else is withdrawing time for their convenience before their students' :( 22:27:41 (wait for Feb 11 for the truth) 22:28:08 boily: No, I just noticed that on the Science site (sciencemag.org, the one oerjan posted) and copied it to here 22:29:39 At feed rate 0.093, all three-way junctions suddenly become unstable and snap. All worms contract into solitons. 22:30:17 tswett: OK, I have to ask 22:30:18 (about the waves, not the mammoth) 22:30:24 What the hell are you talking about? 
22:30:30 hppavilion[1]: https://pmneila.github.io/jsexp/grayscott/ 22:30:41 hppavilion[1]: aha 22:30:42 tswett: Thank you 22:30:57 At feed rate 0.098, solitons suddenly become unstable and die. 22:33:29 0.28 is kewl 22:34:06 -!- AnotherTest has quit (Quit: ZNC - http://znc.in). 22:40:11 -!- tromp_ has joined. 22:40:51 that U-Skate world is oh so slow 22:41:21 i thought everything would shrink to a point until i realized bends grew 22:42:01 hm what's the meaning of U-Skate 22:44:31 tswett, should you not be altering the death rate too 22:44:40 -!- tromp_ has quit (Ping timeout: 250 seconds). 22:44:53 oerjan: it's a specific shape... lemme look up a page about it. 22:45:05 oerjan: http://mrob.com/pub/comp/xmorphia/uskate-world.html 22:45:47 For what it's worth, I think that even though the uskate world looks like black stuff in a sea of orange, it's still better to think of it as orange stuff in a sea of black. 22:46:57 Phantom_Hoover: it's not mandatory. 22:47:22 right but which death rate are you testing on 22:47:35 -!- heroux has quit (Ping timeout: 264 seconds). 22:48:45 -!- p34k has quit. 22:53:36 0.061. 22:54:02 -!- heroux has joined. 22:59:43 -!- jaboja has joined. 23:05:30 -!- tromp_ has joined. 23:14:38 -!- LexiciScriptor has quit (Quit: LexiciScriptor). 23:16:25 -!- tromp_ has quit (Remote host closed the connection). 23:21:53 -!- tromp_ has joined. 23:22:36 With feed rate 0.023 and death rate 0.062, a small number of solitons will eventually spread out and fill the screen. Increase the death rate to 0.063, and this doesn't happen any more—the solitons are no longer capable of reproducing. 23:24:00 -!- lynn has quit (Ping timeout: 256 seconds). 23:24:30 Increase the death rate just a little more, to 0.065, and it looks like solitons can no longer survive. 23:32:11 -!- tromp_ has quit (Remote host closed the connection). 23:33:56 -!- heroux has quit (Ping timeout: 240 seconds). 23:34:28 -!- heroux has joined. 23:37:58 -!- lynn has joined. 23:58:06 -!- Sgeo has joined.