00:00:08 shachaf: I can't find what pointwise indexing means 00:00:27 I'm confused by the [punctuation] 00:01:06 and [] is somewhat overloaded. 00:01:08 `? nit 00:01:10 nit? ¯\(°​_o)/¯ 00:01:32 `learn Nits are there to be picked. 00:01:35 Learned 'nit': Nits are there to be picked. 00:02:35 -!- p34k has quit. 00:03:37 nits are louse eggs hth 00:03:39 int-e: um it means it's optional? 00:04:13 oerjan: [...] 00:04:26 OKAY 00:04:34 well, technically that's also optional hth 00:04:45 o-kay 00:05:06 `? optional 00:05:07 optional? ¯\(°​_o)/¯ 00:05:09 `learn optional. 00:05:13 Learned 'optional.': optional. 00:05:19 oops 00:05:31 `` cd wisdom; grep '\.\.\.' * 00:05:32 a small bug 00:05:42 arothmorphise:arothmorphise ... antormo... antrohm... ant... oh bugger. This should go in the `misspellings of antrhrop... atnhro...' entry. \ code:[11,11,11,15,15,23,12],[5,5,5,3,53,45,16,26,00,20,15,16,22,25,45,91,32,11,15,27,06,01,11,01,47,22,30,13,43,21,11,13,29,61,65,17,19,12,28,17,11,01,23,20,16,20,81,18,32,25,58,22.,1985,10.301350435,1555466 00:05:44 `cat bin/learn 00:05:45 ​#!/bin/bash \ topic=$(echo "$1" | lowercase | sed 's/^\(an\?\|the\) //;s/s\?[:;,.!?]\? .*//') \ echo "$1" >"wisdom/$topic" \ echo "Learned '$topic': $1" 00:06:23 oh right, the space is not optional if it's to remove any of the rest 00:06:24 `` cd wisdom; grep -l '\.\.\.' * 00:06:25 arothmorphise \ code \ hthmonoid \ grep: le: Is a directory \ learn \ `learn \ northumberland \ grep: ¯\(°_o): Is a directory \ grep: ¯\(°​_o): Is a directory \ \oren\ \ procrastination \ qdb \ quoteformat \ remorse 00:06:34 `rm wisdom/optional. 00:06:36 No output. 00:06:43 `? northumberland 00:06:44 Northumberland may be today a sparsely populated country... but SOON! THE NORTHUMBRAINS SHALL RISE! 00:07:22 `culprits wisdom/northumberland 00:07:26 oerjan elliott Bike FreeFull Taneb 00:07:33 `? bike 00:07:34 Bike is from Luxembourg. 00:08:29 hppavilion[1]: It means each element in the tuple gets indexed on its own. 
00:08:49 shachaf: OK, and what does that mean precisely? 00:09:07 shachaf: https://en.wikipedia.org/wiki/Tuple does not speak of "indexing" 00:09:11 hppavilion[1]: Try figuring out what indexing would mean and I'll tell you whether it's right. 00:09:16 @troll 5d6 00:09:16 int-e: 21 00:09:24 Well, this is indexing in the usual sense. 00:09:39 (x,y,z)[0] = x and so on 00:09:50 shachaf: Do you add the values? 00:09:57 xD 00:10:19 shachaf: So... hm... OH! Is it at all like ~ in INTERCAL? 00:10:27 The SELECT operator? 00:10:28 I don't know INTERCAL. 00:10:37 *reads about the ending phase* ... could there be an infinite loop of cleanup steps... <-- you should reask that with ais523 around hth 00:10:54 probably 00:10:55 I don't think it's that. 00:11:02 shachaf: x~y is all the bits of x for which the corresponding bit in y is 1, right-justified 00:11:12 (Or maybe I got which side is which messed up) 00:11:13 shachaf: it's ALL CAPS, what else could it be... I mean now that COBOL is dead? 00:11:17 * int-e runs. 00:11:21 int-e: there can be an infinite loop of cleanup steps, yes 00:11:28 shachaf: Oh :/ 00:11:35 it's a little hard to pull off because cards are typically designed to stop things triggering then 00:11:52 help when did this turn into a mtg conversation 00:12:06 shachaf: oerjan looking through logs 00:12:10 shachaf: What I mean is the composition of e.g. (17, 92, 12) and (1, 2) equal to (17, 92)? 00:12:24 heys523 00:12:48 hppavilion[1]: What are the domains and codomains of those arrows? 00:13:01 shachaf: They're numbers 00:13:06 Which numbers? 00:13:07 shachaf: Natural numbers 00:13:14 Hm... 00:13:19 You have to choose. 00:13:28 shachaf: They're natural numbers 00:13:45 hppavilion[1]: what shachaf means is that an arrow is not determined by its tuple alone 00:13:48 shachaf: Or do you mean which numbers in particular for those arrows? 
00:14:00 oerjan: Ah 00:14:28 An arrow : N -> M is an N-tuple of numbers < M 00:14:30 well, graphs are categories 00:14:32 -!- sphinxo has joined. 00:14:47 (no!) 00:14:47 So (17, 92, 12) : 3 -> M 00:14:51 shachaf: Ah, I think I transcribed it to my notes wrong 00:14:53 But M could be 100 or 1000 00:14:56 reflexive, transitive relations are 00:15:05 `relcome sphinxo 00:15:07 ​sphinxo: Welcome to the international hub for esoteric programming language design and deployment! For more information, check out our wiki: . (For the other kind of esoterica, try #esoteric on EFnet or DALnet.) 00:15:12 (that's the example that I wanted) 00:15:36 shachaf: Oh, so the arrows map numbers to all numbers greater than them, right 00:15:40 boily: thanks 00:15:54 ? 00:15:58 So what's the bees knees in esoteric langs? 00:16:23 sphinxo: in terms of newly popular? best-known? 00:16:40 ais523: newly popular 00:16:40 sphinxo: well your puns seem to be up to par... welcome! 00:17:06 hmm, not sure if any esolangs have really caught on since Three Star Programmer 00:17:11 int-e: whoa whoa whoa, when did this turn into a linear logic conversation 00:17:27 shachaf: you lost me 00:17:41 int-e: wait, what pun 00:17:46 oh 00:17:47 oerjan: the bees one 00:18:17 sphinxo: One that isn't popular- but be used by at least one person in the world someday, if I'm being generous- is a proof assistant I myself made called Thoof 00:18:24 i didn't notice it was a pun 00:18:37 sphinxo: Based on Thue, which is a great language you should check out if you haven't already 00:18:48 oerjan: flew right over your head, eh... 00:19:05 shachaf: Wait, my brain is turning on now 00:19:17 hppavilion[1]: is it on github? 00:19:28 sphinxo: Yes, I'll link you 00:19:55 sphinxo: But there are no published docs yet; however, I can publish the as-of-yet incomplete tutorial if you like 00:19:56 xD 00:20:04 hppavilion[1]: Oh wait I think i've found it, in python right? 
00:20:20 Oh, I thought you were talking about hppavilion[1]'s brain. 00:20:22 sphinxo: Yep 00:20:27 The joke seemed a little drawn out. 00:20:35 int-e: well, bee's knees did fit there without having to reinterpret it. 00:21:20 shachaf: Gah! Your and sphinxo's nicks are the same length and both start with s! 00:21:26 Now I'll always be confused! 00:21:36 you're already always confused hth 00:21:41 shachaf: Oh right 00:22:11 boily: have you figured out the mysterious category twh 00:22:34 hppavilion[1]: a single starting letter seems a bit little to be confusing. 00:22:41 which mysterious category? 00:22:50 Oh, apparently this category has a name. 00:22:50 oerjan: Yeah, but it is 00:23:08 shachaf: isn't it just a subcategory of Set 00:23:13 In the spirit of self promotion, i'd like to present one of my first forays into the world of #esoteric 00:23:24 ya standard bf compiler 00:23:27 written in ocaml 00:23:33 generating java bytecode 00:24:10 oerjan: yes hth 00:25:46 -!- sphinxo has left ("WeeChat 1.4"). 00:26:13 -!- sphinxo has joined. 00:27:32 git.io/v2yj9 00:28:02 sphinxo: weird mix of languages :-) 00:28:06 (in here, that's probably a good thing) 00:28:23 makes sense though, ocaml's good at compilers, jvm is probably the most portable asm 00:28:29 ais523: Do you understand par in linear logic? TWH 00:28:40 -!- tromp has joined. 00:28:50 shachaf: what do you mean by par? I fear the answer is no 00:28:56 I understand the subsets of linear logic I use in my work 00:28:57 The upside-down ampersand. 00:29:03 ah, in that case no 00:29:05 Also sometimes written # 00:29:14 ais523: How about _|_? 00:29:15 ais523: it was my first time doing ocaml actually 00:29:22 Or ?A the exponential thing? 00:29:26 but I didn't really like it and went back to haskell 00:29:45 shachaf: _|_ is just "arbitrary false statement" in most logics 00:29:56 sphinxo: Oh, that's where I remember you from. 00:30:02 I sort-of have a vague idea of how ? 
works but not enough to put it into words 00:30:22 ais523: Well, there's _|_ and there's 0 00:30:34 _|_ is the identity of # 00:30:46 ah right 00:30:52 linear logic sort-of has SCI syndrome 00:30:55 shachaf: yeah i'm the one generally asking the silly questions 00:30:56 but possibly even worse 00:31:10 Spinal Cord Injury? 00:31:32 (SCI is an affine logic, which has the problem that ('a * 'b) -> 'c and 'a -> ('b -> 'c) aren't isomorphic and most language constructs need to work both ways round) 00:31:36 syntactic control of interference 00:31:45 This game semantics interpretation made the most sense to me. 00:31:57 ais523: Oh, it has both an internal hom and a product but they're not adjoint? 00:31:59 That's interesting. 00:32:13 The product has no right adjoint and the internal hom has no left adjoint? 00:32:19 indeed 00:32:33 it causes utter chaos at the category theory level 00:32:41 in terms of programming it, it's only mildly annoying 00:32:47 y'all played tis-100? I imagine that'd be right up you guys/girls boats 00:33:05 ais523: Sounds sort of reasonable. Maybe. 00:33:06 annoying enough, though, that SCI errors are something that I have to keep correcting in other people's code 00:33:34 ais523: Anyway in this game semantics interpretation, when you have A#B, you run two games in parallel, one for A and one for B. 00:33:40 quite a bit of work on my thesis was trying to create a more categorically sensible SCI 00:33:40 And you only have to win one of them. 00:33:59 So for instance A # ~A is always true, because if you get a refutation on one side you can use it on the other side. 00:34:07 it turns out that it has hidden intersection types 00:34:18 ais523: Hmm, I should read your thesis. 
00:34:23 shachaf: hmm, that makes me think of a programming language construct 00:34:35 in which you give two terms, it returns one of its arguments 00:34:51 but it's guaranteed to return something other than bottom unless both arguments are bottom 00:35:10 * ais523 wonders if the Haskell people would consider that pure 00:36:35 ais523: Haskell people probably want a guarantee that they're equal unless they're bottom. 00:36:42 https://wiki.haskell.org/Unamb 00:36:50 good name for it :-) 00:37:14 sphinxo: I played it. It's neat. 00:37:16 now I'm wondering if it's useful 00:37:19 I guess you could do sorting with it 00:37:35 Sure it's useful. 00:37:39 one argument an O(n log n) worst case, the other an O(n) best case that sometimes blows up 00:37:47 http://conal.net/blog/tag/unamb 00:39:39 -!- tromp has quit (Remote host closed the connection). 00:43:23 ais523: Oh, A # B is also ~(~A x ~B) 00:45:51 -!- heroux has quit (Ping timeout: 250 seconds). 00:55:27 -!- sphinxo has quit (Quit: WeeChat 1.4). 01:01:45 -!- heroux has joined. 01:02:54 -!- llue has quit (Quit: That's what she said). 01:03:03 -!- lleu has joined. 01:07:37 mwah ah ah. Tiamat is dead! 01:08:25 dragonskin cloak is miiiiine! 01:09:05 -!- tromp has joined. 01:11:40 -!- carado has quit (Quit: Leaving). 01:15:28 -!- Phantom_Hoover has quit (Read error: Connection reset by peer). 01:15:42 -!- mad has joined. 01:16:16 will someone explain this to me: why some programmers use C but have an aversion to C++ 01:17:02 (especially on non-embedded platforms) 01:19:32 Because the things that C++ is good at, C is about as good at, and the things that C++ does better than C, other languages do significantly better. So, C++ is a giant pile of complexity with minimal benefits. 
01:21:12 er, no, there is one class of stuff where C doesn't have the tools (like, you can do it but it's cumbersome), and java/C#/etc can't do it because of the mandatory garbage collector 01:21:40 once you have lots of dynamic sized stuff C++ has a large advantage over C 01:22:24 You know that there's languages out there besides C-family languages, Java-family languages, and P-family languages, right? 01:22:46 -!- lynn has quit (Ping timeout: 252 seconds). 01:23:01 this is why C++ is popular for making games (too much dynamic sized stuff for C, can't use java/C# because garbage collector creates lags) 01:23:06 P-family languages? 01:23:18 ais523: Gregor's joking name for Perl, Python, Ruby, etc. 01:23:26 ah right 01:23:43 pikhq: what other language category is there? functional languages? 01:24:37 the other languages I can think of generally aren't particularly fast 01:25:26 https://en.wikipedia.org/wiki/Template:Programming_paradigms *cough* 01:25:55 There's more programming language *categories* than you think there are languages, it sounds like. :) 01:26:16 who's gregor? 01:26:42 izabera: Gregor Richards, one of the channel members who's not been that active of late. 01:26:54 He's still here though 01:26:58 Gregor: Isn't that right? 01:27:05 pikhq : that list is a split by paradigm, not by speed grade 01:27:34 mad: C++ ain't exactly "fast" in idiomatic use... 01:27:57 I mean, sure, you can write fast C++, but once you're using the STL you've abandoned all hope. 01:28:06 izabera: Gregor's most famous for writing EgoBot and HackEgo 01:28:12 `? Gregor 01:28:17 pikhq : not if you're using STL right 01:28:20 I thought he was most famous for the hats. 01:28:24 Gregor took forty cakes. He took 40 cakes. That's as many as four tens. And that's terrible. 
01:28:28 oh, he wrote lagbot 01:28:30 neato 01:28:39 it wasn't always laggy 01:28:43 ie basically as a replacement for arrays [] except it manages the size 01:28:46 but then he got cheap 01:28:58 Also, I wouldn't take game developers as a good example of "how to write programs"... 01:29:09 oerjan: if you want a cheap bot, see glogbackup (which is also Gregor's) 01:29:59 Unmaintainable piles of shit that are written by the sort of people who are willing to accept 80 hour workweeks are par for the course. 01:30:37 that's a rant i've never heard 01:31:02 what's the problem with working too many hours a week? 01:31:04 -!- Sgeo has quit (Ping timeout: 260 seconds). 01:31:42 Um, humans are kinda bad at being productive that long. Especially at mentally intense tasks. 01:32:17 if garbage collectors are ruled out you're left with, er, basically: C, C++, assembler, delphi, rust, and objective C (and I guess cobol and ada) 01:32:24 as far as I can think of 01:32:38 ... Have you never even heard of Forth? 01:32:44 ok and forth 01:32:53 also fortran, i think 01:32:56 Or Tcl, for that matter? 01:32:59 ok and fortran 01:33:06 * izabera adds bash to the list of non-garbage-collected languages 01:33:08 Hell, and Python. 01:33:29 how is python not garbage collected 01:33:35 Python is reference counted. 01:33:50 also it's dynamic typed which is a much larger speed disadvantage 01:33:53 reference counters fall into a similar category to garbage collectors to me 01:34:00 they have noticeable overhead, often more 01:34:12 the difference being that it's predictable overhead that always happens in the same places 01:34:12 ais523: They're automatic memory management, but GC is a different technique. 01:34:18 pikhq: yes 01:34:27 they are not the same, but they have similar effects on a program 01:34:28 Ah, "similar". 
01:34:28 ""The standard C implementation of Python uses reference counting to detect inaccessible objects, and a separate mechanism to collect reference cycles, periodically executing a cycle detection algorithm which looks for inaccessible cycles and deletes the objects involved."" 01:34:33 Yes, not the same but similar. 01:35:37 reference counting doesn't cause 100ms pauses in your app like the java GC does 01:36:39 Does Java not have a way of using a more interactive-use-appropriate GC? 01:36:56 you can make hints to Java about when a good time to GC would be 01:37:10 ais523 : in a video game, there's never a good time 01:37:14 but a) it doesn't have to respect them, b) you can't delay GC, only make it happen earlier (and hopefully not again for a while) 01:37:18 mad: loading screens 01:37:23 great time to GC 01:37:31 tswett: Hi yet? 01:37:38 if you have the memory (and sometimes you do, but not always), you can just leak until the next loading screen and catch all the memory up there 01:37:38 if your game has loading screens, yes 01:38:00 very few games don't 01:38:08 mad: Good luck 01:38:14 although in many, they're disguised, or short enough that you don't really register them 01:38:16 Hey there. 01:38:18 mad: Making a loading screen-free game, that is 01:38:22 It happens you caught me at a bad time. 01:38:22 tswett: Yay! 01:38:25 Oh 01:38:26 I have to go to bed now. 01:38:29 s/yay// 01:38:31 even in the disguised/short ones, a 100ms pause isn't noticeable 01:38:32 i 01:38:32 Night, everyone. 01:38:34 Also, if you have a *good enough* GC, you should be able to only pause for short periods of time between frames. 
01:39:45 it would still be better to have only ref counting and no GC in that kind of programs though 01:40:19 mad: so if the root of a structure gets freed 01:40:28 you then have a pause while the rest of the structure gets freed recursively 01:40:32 refcounting doesn't remove pauses 01:40:39 simply makes it easier to predict when they'll happen 01:40:56 but (1) other threads keep going 01:41:14 as opposed to GC which has a "stop the world" phase where it pauses every thread 01:41:35 not necessarily, concurrent GCs exist 01:41:39 so chances are the pause will happen on your data loading thread (not your gfx thread) 01:41:39 That's only true of a subset of GCs. 01:42:04 even concurrent GCs do have a "stop the world" phase, it's just much shorter 01:42:13 (if what I've read is correct) 01:42:23 By the same notion, so does malloc because malloc has a mutex. 01:42:59 pikhq: I've managed to deadlock on that mutex before now :-( 01:43:25 let's just say, SDL's situation with timing and concurrency is so bad I've decided to take a look at SFML to see if it's any better 01:43:46 SDL is... not a well-designed library. 01:44:48 yeah SDL is way less good than it should've been 01:46:26 pygame makes SDL sane. 01:47:18 boily: does it prevent it polling every 1ms? 01:48:28 IIRC, I don't think so. 01:48:52 the other thing is that refcounting doesn't have heap compaction 01:48:56 which is a good thing 01:49:22 It's kinda a wash. 01:49:36 (and orthogonal to refcounting, really) 01:50:12 Heap compaction costs when it happens, but means the allocator can spend less time in allocation. 01:50:47 heap compaction on 300megs of data isn't pretty 01:51:04 I've forgotten how to count that low. 01:52:20 like, it's all fine if it's server software and it doesn't matter if the whole app stops for half a second 01:52:33 ... No, it isn't. 01:52:37 then, yes, by all means use java and C# and python and whatnot 01:53:43 If a service pauses for half a second I get paged. 
01:58:14 pikhq: If an individual server has a GC pause of 500ms? 01:58:40 shachaf: I exaggerate. 01:58:55 shachaf: But we *do* have SLAs for response time to requests... 02:00:01 I shouldn't talk about details in here anyway. 02:00:27 Hmm, I think I know how to set off pikhq's pager. 02:01:04 Joke's on you, I'm not on call right now 02:01:36 But is your pager thing actually turned off? 02:01:48 Well, no... 02:06:00 -!- andrew_ has joined. 02:06:35 -!- Sgeo has joined. 02:14:48 <\oren\> aaah 02:14:49 -!- hppavilion[1] has quit (Ping timeout: 244 seconds). 02:14:54 <\oren\> it's elif 02:15:12 <\oren\> why can't it just also allow else if and elsif? 02:15:27 in python? 02:15:36 <\oren\> yah 02:15:45 probably elif is much used so it is easier to write in that way? 02:15:47 not really sure. 02:16:24 <\oren\> true but it should allow elif, else if and elsif as alternatives 02:16:38 "one way to do that" :p 02:16:49 <\oren\> argh 02:16:53 you want perlthon 02:17:10 * izabera googled it and it's an actual thing 02:17:23 he\\oren\! 02:18:49 <\oren\> hi 02:18:54 -!- mysanthrop has joined. 02:19:27 he hates you 02:19:53 :o 02:20:05 <\oren\> who? 02:20:27 you 02:20:37 All right, and whom? 02:20:40 you 02:20:53 Well, that's rude 02:21:01 yeah 02:21:48 <\oren\> izabera: why do you think I hate him? 02:21:52 <\oren\> `relcome mysanthrop 02:22:12 ​mysanthrop: Welcome to the international hub for esoteric programming language design and deployment! For more information, check out our wiki: . (For the other kind of esoterica, try #esoteric on EFnet or DALnet.) 02:22:24 Needs more rainbows 02:23:07 I wonder if I can get mutt working on a jailbroken iPhone 02:23:41 why 02:23:48 -!- j-bot has quit (Ping timeout: 248 seconds). 02:23:48 -!- myname has quit (Ping timeout: 248 seconds). 02:23:48 -!- Alcest has quit (Ping timeout: 248 seconds). 02:23:49 -!- MoALTz has quit (Ping timeout: 248 seconds). 02:23:49 -!- nisstyre_ has quit (Ping timeout: 248 seconds). 
02:23:57 Consistent mail experience? 02:23:59 unless your mutt has a much better interface than mine 02:24:27 <\oren\> I just use a ssh app and use alpine 02:24:32 how do C programmers live without std::vector and std::string 02:24:32 you bought an iphone, you clearly care about eye candy 02:24:53 I technically lease an iPhone 02:26:40 <\oren\> mad: i have a bunch of poorly written functions I copy from one project to the next over and over 02:27:06 mad: Easily. 02:27:26 ... Or poorly, if you go by the average results. :P 02:27:32 reallocate arrays every time they change size? 02:27:50 Why would you do that if the std::vector implementation doesn't? 02:28:10 It's not like it's rocket science to have a struct that has "size" and "capacity" separately. 02:28:25 fizzie : true but then you might as well use std::vector 02:28:40 which does that and it can't leak 02:29:25 <\oren\> my functions resize them when they get to each power of two 02:29:57 \oren\ : that's exactly what std::vector does 02:30:09 I don't think array resizing is a major source of memory leaks. 02:30:25 I read this thing that was arguing that powers of two is one of the worst choices you could make. 02:30:26 "new" is your friend if you want to leak memory in C++. ("can't" really is too strong) 02:30:43 Now, powers of three, though. That's the future 02:30:47 well, the point is that std::vector replaces stuff * 02:30:55 stuff * can leak, of course 02:31:06 std::vector can't 02:31:07 C++ does have a couple of resource management idioms that C doesn't support, but it's far from golden anyway 02:31:29 <\oren\> I like std::vector. 
I *HATE* std::ios[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Ctream 02:31:41 <\oren\> wtf just happends 02:31:51 Maybe it was https://github.com/facebook/folly/blob/master/folly/docs/FBVector.md 02:31:53 iostream is a big raised middle finger to STL 02:32:22 I cannot really understand how can it be possible to have STL and iostream in the *same* standard 02:32:22 int-e : C doesn't have std::vector, that's the real one that's missing and it's a major, major gaping hole 02:32:53 mad: Anyways, frankly if you think that std::vector is your solution to memory management problems you are too unaware of the problems there are to solve to be having this discussion. 02:32:56 lifthrasiir : 80% of the time I simply ignore iostream but use STL anyways 02:32:59 "always non-negative, almost always measurable, frequently significant, sometimes dramatic, and occasionally spectacular" 02:33:17 pikhq : if you need a special case then STL won't cut it true 02:33:54 but in my experience, "variable sized array" is 10x more common than any other memory structure and its omission from C hurts hard 02:33:59 mad: yeah. STL is (within its design constraint) well-designed library, while iostream is an epic fail 02:34:29 It's also by far the easiest data structure to implement, so... 02:34:32 for example, locale is a shit 02:34:36 <\oren\> well, realloc() is basically the equivalent for C 02:34:49 <\oren\> there's no operator renew 02:35:10 pikhq : yeah but you reimplement it so often that it should be a language feature really 02:35:12 anyone who tried to write a new locale with iostream (to be exact, std::codecvt etc.) will understand that 02:35:27 Sure, it'd be a nice addition to libc. 02:35:51 there are, like, 4 features I care about in C++ 02:36:29 std::vector, std::string, std::map, and putting functions in structs/classes for convenience (ie less typing) 02:36:47 That's the same for everyone. 
Unfortunately, it's a different 4 for each person, and C++ has enough features that each individual human being gets their own set of 4 features. 02:36:53 std::vector is not just a "nice addition", it's a major feature 02:38:00 <\oren\> I just have a function for appending to an array 02:38:03 `? mad 02:38:06 (I suspect that C++ releases new versions to keep up with global population growth) 02:38:08 This wisdom entry was censored for being too accurate. 02:38:27 pikhq : that is true 02:39:08 <\oren\> apparr(char**array,int*size,char*part,int*partz); 02:39:29 https://developer.gnome.org/glib/stable/glib-Arrays.html 02:39:31 realloc() isn't bad 02:39:36 int-e: the mad that was censored isn't the mad that is in the chännel hth. 02:40:03 Ugh, glib. glib makes C++ look *angelic* in comparison. 02:40:19 <\oren\> my function does realloc iff size would increase through a power of two 02:41:04 \oren\ : yeah. I use std::vector for exactly that except with less potential mistakes 02:41:06 <\oren\> I don't remember why partz is passed by pointer 02:41:43 he\\oren\. more pointers, more stars, more sparkliness. 02:41:52 pointers are evil 02:42:06 <\oren\> computers are evil 02:42:14 pikhq: sure but if the objection is that one has to reimplement resizable arrays all the time, that's one of the counterarguments that come to my mind 02:42:26 except pointers that are essentially references, those are okay 02:42:27 int-e: Fair enough. :) 02:42:46 <\oren\> mad: isn't that all pointers? 02:42:53 \oren\: I see that you are still fonting ^^ 02:43:06 (nice fraktur btw.) 02:43:09 <\oren\> pointers and references are different words for the same thing 02:43:27 \oren\ : well, basically if it's pointing to data owned by some other structure, it's okay 02:44:05 \oren\ : if it's pointing to a memory allocation and you get a leak if the pointer gets overwritten, then it's bad 02:44:36 <\oren\> how's that different from references? 
02:45:38 well, c++ references are typically used in function declarations and they refer to some object 02:46:05 you can't use c++ references to do allocation/deallocation so by definition they generally can't be evil 02:46:23 "can't", again. 02:46:28 generally 02:46:29 it's C++ we're talking about. everything can be alignment-shifted. 02:46:40 boily : and then it'll be slow 02:46:51 but that's a rare case 02:46:51 <\oren\> well then what good are they? you need some way to refer to non-stack memory... 02:47:09 If every programmer were as disciplined as that, we'd already be out of work 02:47:13 I bet delete &ref; is valid 02:47:16 -!- nisstyre_ has joined. 02:47:55 \oren\ : easy, when you have a function that returns 2 things, one can be returned as a return value but the other has to be a pointer or reference argument and then the called function will write in it 02:48:05 that's what references are for 02:48:17 they're essentially interchangeable with pointers 02:48:45 <\oren\> that's what I said, they're just a pointer. 02:48:54 internally, c++ references are pointers yes 02:49:02 time to have unevil, functional sleep. 'night all! 02:49:03 basically they're just syntactic sugar 02:49:09 -!- boily has quit (Quit: SELFREFERENTIAL CHICKEN). 02:49:27 int-e : C++ doesn't guard against messing things up badly :D 02:49:40 <\oren\> specifically, a int& is the same as a int*const, but with syntax sugar 02:50:10 <\oren\> allowing you to code as if it's a int 02:50:49 \oren\: and it's much harder to pass in NULL. 02:50:52 basically if there's a way to code something with malloc/free/new/delete, and a way that doesn't involve these, I always go for way #2 02:51:59 If you're not writing a custom malloc implementation every time, are you really doing your job? 02:52:22 the standard malloc goes through the bucket allocator 02:52:31 prooftechnique: I have a word for those people, but it's inappropriate for polite conversation. 
02:52:36 for typical uses it does a pretty good job 02:52:44 prooftechnique: If you're writing a custom malloc implementation every time, are you really doing your job? 02:52:52 <\oren\> well at my work we use our own resizable array class 02:53:07 <\oren\> instead of std::vector 02:53:13 how come? 02:53:29 <\oren\> because apparently std::vector doesn't play well with threads or something 02:53:31 The same is true of my work, but at this point I'm a little surprised we don't just have our own implementation of the STL... 02:54:12 \oren\ : depends on when it changes size :D 02:54:15 the NIH is strong 02:54:21 `? NIH 02:54:23 NIH was /not/ invented by Taneb. 02:54:43 `culprits wisdom/NIH 02:54:50 if you have a size change at the same time another thread looks or writes in the std::vector then you have a problem yes 02:54:51 No output. 02:54:53 int-e: That's practically the Google way. 02:55:00 `culprits wisdom/nih 02:55:04 int-e 02:55:07 I'm a little sad that the CPPGM is already running. It seems like it'd be a fun thing to fail at 02:55:11 meh, I forgot. 02:55:25 -!- ais523 has quit. 02:55:28 <\oren\> int-e: well half our codebase is in an in-house language instead of c++, and the compile process uses another in-house language instead of makefiles, so you know.... 02:55:30 pikhq: The Google way isn't exactly NIH. They have their own variant of it. 
02:55:40 shachaf: :D 02:57:06 \oren\ : basically whenever some std::vector can change size, it needs to be 100% mutexed accessible by only 1 thread, or else you're in trouble 02:57:18 the rest of the time it's the same as a C array 02:58:39 supposedly copy-on-write containers work well with threading 02:59:06 <\oren\> i think that's what we have NIHed 03:00:06 the other case I've heard is code that had to work on stuff like the nintendo DS 03:00:09 <\oren\> I haven't looked into the details since the interface is almost exactly the same as std::vector 03:00:23 which if I'm not mistaken had a broken STL or something like that 03:00:48 <\oren\> this has to work on coffeemachines and things 03:00:55 my brother's company has a NIH std::vector equivalent because of that 03:02:39 for strings, ironically std::string basically complements char * 03:03:09 char * strings are cool except that you basically can't store them, std::string fixes just exactly that 03:05:19 <\oren\> can't store them where? 03:05:37 well, char * has no storage 03:05:59 <\oren\> what the heck does that mean? 03:06:17 suppose you have to save some text data inside a struct 03:06:33 your options are like 03:07:28 char text[99]; // + 1 million runtime checks and prayer and hope that it never goes over 03:08:41 char *text; // and then make sure it's set to null in every single constructor and make sure it's deleted in the destructor and then checks that it's not null every time you read it and malloc/realloc if it ever changes size 03:09:12 std::string text; 03:10:32 it's just that option #3 has way less common failure modes than option #1 and option #2 03:10:48 <\oren\> std::string could be replaced with a bunch of functions that take char* and handle everything you just said. 03:11:02 \oren\ : yes that's option #2 03:11:08 char * in the class 03:11:23 <\oren\> but the point is I already have such functions 03:11:51 `addquote pikhq: The Google way isn't exactly NIH. They have their own variant of it. 
03:11:58 1270) pikhq: The Google way isn't exactly NIH. They have their own variant of it. 03:12:31 \oren\ : and you never forget to put them in constructors, destructors, and to put checks against null? 03:13:12 <\oren\> I don't have constructors or destructors, and all my string handling functions check for null 03:13:41 <\oren\> (because I'm writing in C, which doesn't have constructors or destructors) 03:13:59 \oren\ : well, when mallocating and freeing structs of that type then 03:14:11 of the type that contains the char * 03:14:37 <\oren\> well, since my usual first step is something like: 03:15:10 <\oren\> struct foo *f = newfoo(); 03:15:19 <\oren\> then, inside newfoo: 03:16:08 <\oren\> struct foo *f = malloc(sizeof(struct foo)); *f = nullfoo; return f 03:16:30 -!- oerjan has quit (Quit: Late(r)). 03:16:43 <\oren\> that doesn't happen, because I have a prototype for all foo objects (nullfoo) 03:17:08 and you have a deletefoo() matching with every newfoo() ? 03:17:13 <\oren\> yes 03:18:00 yeah i guess that works 03:19:18 <\oren\> I even have some functions that can delete an array, taking a pointer to a delete function to be called on each element 03:19:26 <\oren\> and things like that 03:19:47 makes sense 03:20:13 <\oren\> it's an obvious extension of the precedent set by qsort and bsearch 03:20:35 <\oren\> they just didn't bother with it in the C stdlib 03:20:57 It's kind of the reverse of my coding style (which could be summarized as "avoid malloc/free unless there's really no other option") but I guess it's sorta functional 03:21:29 <\oren\> it's what you do if you're writing C and not C++ 03:21:47 which makes sense if you're doing embedded coding yes 03:25:41 -!- nortti_ has joined. 03:25:42 -!- int-e_ has joined. 03:26:20 -!- puck1pedia has joined. 03:26:27 -!- lambda-calc has joined. 03:26:27 -!- lambda-11235 has quit (Ping timeout: 260 seconds). 03:26:28 -!- aloril_ has quit (Ping timeout: 260 seconds). 
03:26:29 -!- puckipedia has quit (Ping timeout: 260 seconds). 03:26:29 -!- Gregor has quit (Ping timeout: 260 seconds). 03:26:30 -!- nortti has quit (Ping timeout: 260 seconds). 03:26:30 -!- atehwa_ has quit (Ping timeout: 260 seconds). 03:26:30 -!- catern has quit (Ping timeout: 260 seconds). 03:26:30 -!- quintopia has quit (Ping timeout: 260 seconds). 03:26:30 -!- int-e has quit (Ping timeout: 260 seconds). 03:26:52 -!- Gregor has joined. 03:27:35 -!- bender|_ has joined. 03:27:40 -!- puck1pedia has changed nick to puckipedia. 03:28:06 -!- aloril_ has joined. 03:31:06 -!- atehwa has joined. 03:31:28 -!- ais523 has joined. 03:31:29 -!- ais523 has quit (Remote host closed the connection). 03:31:30 -!- j-bot has joined. 03:37:44 -!- quintopia has joined. 03:43:29 -!- hppavilion[1] has joined. 03:43:36 -!- catern has joined. 04:02:17 -!- hppavilion[1] has quit (Ping timeout: 244 seconds). 04:12:33 -!- hppavilion[1] has joined. 04:15:07 -!- ais523 has joined. 04:15:49 OK, so SFML uses a very thread-centric model 04:16:03 e.g. there's no way to inject user-defined events, no way to do timers, etc. 04:16:51 however, it /also/ doesn't define any safe way to communicate between threads, other than mutexes, and I don't think you can form the equivalent of a select() out of mutexes 04:17:08 * ais523 is in #esoteric, and thus takes questions like "can you create a message queue out of nothing but mutexes" seriously 04:18:26 so the question is, what are the sensible cross-platform ways to merge events coming in from multiple threads, when your threading primitives suck? 
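[editor's note] SFML itself doesn't offer this, but the "merge events from multiple threads" problem above is short to solve with standard C++ primitives: a mutex plus a condition variable (the semaphore-like blocking primitive missing from SFML's API). A sketch, with all names hypothetical:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Blocking event queue: any producer thread (timer, keyboard,
// socket, ...) pushes; the event loop blocks in pop() until
// something arrives -- no polling required.
template <typename Event>
class EventQueue {
    std::queue<Event> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(Event e) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(e));
        }
        cv_.notify_one();  // wake the event-handling thread
    }
    Event pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });  // block until non-empty
        Event e = std::move(q_.front());
        q_.pop();
        return e;
    }
};
```

This is essentially what the three-lock construction later in the discussion is trying to build out of mutexes alone.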
04:20:24 note: something you /could/ do entirely within SFML is to create a TCP listening socket and use that, but a) this uses up a global system resource (open ports), b) there's no way to restrict connections to localhost so it's more than a little insecure 04:20:35 (no way within SFML's API, that is; you can obviously do it in TCP) 04:21:34 ais523: define "out of nothing but mutexes" 04:22:04 are we talking about communication via try_lock()? 04:22:09 the only thread-safe blocking primitive that you have available is the mutex lock, which will block if another thread has the mutex locked 04:22:32 the problem isn't transferring the data, because you can do that via shared memory 04:22:37 (which is the default for threading) 04:22:52 the problem is blocking until there's a message ready to receive 04:22:55 ahhh 04:23:11 and AFAICT, the problem is that you can only try to lock one mutex at a time, a specific thread holds it 04:23:28 and so you're blocked until that specific thread gives you permission 04:23:32 (also you can't do anything meanwhile) 04:25:04 it's basically the opposite situation to the situation for which mutexes were designed; we don't have one process holding the lock and many waiting on it, we have many processes holding the lock and one waiting on one of them to release it 04:25:09 s/process/thread/ 04:26:07 isn't SFML a multimedia library? 04:26:32 mad: yes 04:26:46 however this means it contains an event loop 04:27:09 and its event loop uses a "use different threads for different sorts of events" model (implicitly in that it doesn't support timers, has sockets as a separate thing from windows, etc.) 
04:27:18 it also supplies threads, and mutexes 04:27:32 but this isn't enough to be able to communicate between threads without polling AFAICT 04:28:16 ais523: yes, I don't think it's possible either 04:28:31 I'm not familiar with how it's done in the networking world 04:28:36 mad: polling 04:28:43 under the hood, anyway 04:29:10 so what I want is either a solution a) inside SFML using other primitives it has (IMO impossible), or b) using cross-platform primitives that are widely implemented 04:29:36 I could use pthreads, I guess; however I don't know how that works on Windows/Strawberry 04:29:59 and/or how well it plays with SFML (which after all, has its own threading abstraction) 04:30:07 wait, what's the thing you can't do with mutexes? 04:30:27 mad: block until something happens on any of multiple threads 04:30:51 ais523: semaphores 04:31:10 coppro: semaphores would work fine, but SFML doesn't supply them as a primitive 04:31:18 ais523: most platforms do though 04:31:19 ais523 : oh I see 04:31:23 hence b) 04:31:31 ais523 : ...what's the application for that? 04:31:32 hard to find something more primitive 04:31:37 right 04:32:04 mad: the situation is that I am writing a library (libuncursed; coppro's worked on it in the past too) that presents an event-loop interface to programs using it 04:32:16 and abstracts over a number of different backends (currently, POSIX, Windows, and SDL) 04:32:22 definitely semaphores 04:32:33 hmm, how about 04:33:10 event handling thread blocks on one mutex 04:33:11 there are others that could be sensible, too (e.g. 
X, GDI) 04:33:34 any of the multiple other threads can unlock that mutex 04:33:42 you can't unlock a mutex unless you hold it, surely 04:33:48 * ais523 checks to see if SFML have messed this up 04:34:20 hmm, it doesn't say that you can't unlock a mutex while another thread holds it 04:34:43 perhaps it's worth experimenting with 04:35:00 seems vulnerable to race conditions but that maybe isn't insoluble 04:35:06 well 04:35:17 (e.g. using a separate mutex to protect the signalling one) 04:35:21 that mutex would only be used to pause the event handling loop 04:36:11 so, let's see 04:36:13 we have two mutexes 04:36:18 each particular resource would have its own mutex so that the owner thread of that resource would unlock its resource, then unlock the event handling thread's mutex 04:36:22 oh, bleh 04:36:26 these mutexes are recursive 04:36:57 the obvious algorithm, assuming you can unlock someone else's mutex, ends with the event handling thread intentionally deadlocking on itself 04:37:02 but you can't do that with a recursive mutex 04:37:15 so we'll have to create a separate thread purely to deadlock it 04:38:14 so three locks (A, B, C), two "special" threads (event and deadlock), N generic threads 04:38:52 neutral state is A locked by deadlock, event waiting on it; B locked by event, deadlock waiting on it; C unlocked 04:39:18 when a generic thread wants to send a message, it locks C, pushes the message on a queue, unlocks A if the queue was empty (this is protected by C), unlocks C 04:40:35 -!- XorSwap has joined. 04:41:26 when event gets past the deadlock, it locks C, and handles messages from the queue until it's empty; then, hmm 04:41:31 SFML doesn't even have a trylock 04:41:42 what sort of use is having a general event handling thread like that for? 04:41:47 so how do we get back into the deadlocked state?
04:42:33 mad: say you want to wait for a key to be pressed, or for 1 second to pass 04:42:44 and the timer thread and keypress handling thread have to be different for some reason 04:43:53 that's a bit of a weird test case 04:43:58 your two options are: run the entire logic of the program on whichever thread happened to be the one that received the event (key/timer); or send all the messages to the same thread 04:44:41 it's not a weird test case at all, it's a common enough operation that, say, both ncurses and uncursed provide a function that does exactly that (although ofc the timeout's configurable) 04:44:58 or for another example, say you want to wait for either a keypress, or receiving a network packet 04:45:44 multimedia apps often just keep processing video frames and handle keypresses on next frame 04:46:08 that's a common way to write IRC clients (although in this case the responses to a keypress and to a network packet are different enough that you can run them on different threads without too much effort, that isn't something you should have to do) 04:47:15 mad: that's terrible for battery life, though 04:47:22 you want to be able to block until something happens, rather than having to poll 04:47:31 (in fact it's the reason I wanted to move away from SDL in the first place) 04:48:13 I guess it depends on if you have the case where your app does nothing when there's no input 04:48:54 which I guess is sensible for an irc client but not a game 04:49:18 mad: turn-based games often do nothing when there's no input 04:49:31 unless they have audio 04:49:48 Different thread 04:50:05 audio is one of those things that can safely be run in an independent thread, yes 04:50:16 or interrupt-to-interrupt, on less powerful systems 04:50:25 yeah but that means you have at least one always active thread 04:50:26 this is why it's often the only thing that works when the rest of the game crashes 04:50:43 mad: no?
audio thread blocks until the sample buffer drains, typically 04:50:45 which means that you might as well do polling on your event handler thread 04:50:52 there's only so much the audio thread can do before blocking 04:51:02 ais523 : yes, which happens at least 50 times per second 04:51:05 you're not running in a busy loop calculating samples 04:51:12 <\oren\> do you have any primitive atomics on shared memory? 04:51:37 also 50fps is still slower than a typical video framerate 04:51:47 \oren\: std::atomic would work in this case, I think 04:51:51 given that it's C__ 04:51:51 <\oren\> (although last time I touched that stuff I got terrible radiation burns) 04:51:53 * C++ 04:53:07 depends on what you mean by "atomic" 04:53:56 mad: a variable that supports operations that cannot be interfered with by other threads 04:54:01 for typical cases it's really the operations you do on your primitive that are atomic, I guess... and yeah I guess std::atomic does this for you 04:54:05 there are a range of atomic operations, some more useful than others 04:54:18 test-and-set is a common example of a primitive that's powerful enough to build anything else 04:54:36 (set-to-specific-value, that is, not set-to-1) 04:55:03 yeah, the equivalent of lock cmpxchg? :D 04:55:04 <\oren\> yeah I think we used a swap operation in my OS class 04:55:29 <\oren\> or maybe a compare and swap? 04:55:52 Surely CAS. Just swap isn't sufficiently general I don't think. 04:56:13 pikhq: IIRC pure swap is sufficiently general, but much more complex to use 04:56:23 Ah, okay. 04:56:26 I think it needs the compare to handle the case where some other thread has changed the value 04:56:33 between the read and the write 04:56:37 pikhq: you can construct a boolean test-and-set out of a swap by swapping in a 0 or 1 04:56:47 swapped-out value is the test, swapped-in value is the set 04:56:54 And you don't find hardware without CAS really, so it's not worth the effort. 
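[editor's note] The test-and-set-from-swap construction described above maps directly onto C++'s std::atomic, where exchange() is an atomic swap; a minimal sketch:

```cpp
#include <atomic>

// A spinlock built from nothing but atomic swap: swapping in 1
// both reads the old value (the "test") and writes the new one
// (the "set") in a single atomic step. If we swapped out a 1,
// someone else holds the lock, so spin and try again.
class SpinLock {
    std::atomic<int> guard{0};
public:
    void lock()   { while (guard.exchange(1) == 1) { /* spin */ } }
    void unlock() { guard.exchange(0); }
};
```

std::atomic's operations carry the appropriate memory barriers by default, as the discussion notes a real-world swap must.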
04:56:55 <\oren\> yeah we used just swap 04:57:24 <\oren\> the OS ran on some sort of virtual machine 04:57:35 you basically use the test-and-set as a mutex to guard a non-atomic operation on shared memory 04:57:43 I think you might have to spin until the value is not set any more, though 04:58:04 how does swap guarantee that some other thread hasn't changed the value after your read but before your write? 04:58:05 <\oren\> yup, that's what we did, I remember it now 04:58:29 mad: atomic swap guarantees that because atomic 04:58:47 <\oren\> i think maybe it just freezes the other processors? who knows 04:58:49 hmm, so SFML on Linux, at least, uses pthreads 04:59:09 \oren\: it actually uses quite a complex locking mechanism internally 04:59:22 the processors will block on the lock on the memory address if they try to access the same address 04:59:29 there might also be some memory barriers involved 04:59:47 <\oren\> well, in my course we were on a virtual machine, so who knows 04:59:52 ais523 : but you can't prevent the swap if the value has changed 05:00:00 mad: which value? 05:00:10 suppose you're trying to do an atomic increment 05:00:14 value is 0 05:00:22 mad: you don't do the swap on the value you're incrementing 05:00:26 you do it on a second, guard value 05:00:37 which is 1 while in the middle of an increment, and 0 the rest of the time 05:00:43 to increment, first you swap the guard value with 1 05:00:48 <\oren\> maybe cmpxchg is better for real processors because you don't need so much locking 05:01:07 cmpxchg lets you have atomics without having a second guard value like that.
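[editor's note] The no-guard-value cmpxchg approach just mentioned, rendered with std::atomic's compare_exchange (a hypothetical standalone example): read the value, then CAS in value+1; if another thread changed it in between, the CAS fails and you retry.

```cpp
#include <atomic>

// Lock-free increment via compare-and-swap: no second guard
// value needed. On failure, compare_exchange_weak reloads `x`
// with the current value, so the loop simply retries.
int atomic_increment(std::atomic<int> &v) {
    int x = v.load();
    while (!v.compare_exchange_weak(x, x + 1)) {
        // x now holds the freshly observed value; try again
    }
    return x + 1;  // the value we successfully installed
}
```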
05:01:13 if you swapped a 0 out of it, then you do the increment, and swap a 0 back in (and will get a 1 after your swap unless shenanigans) 05:01:16 \oren\ : cmpxchg lets you do atomic increment without a guard value yeah 05:01:30 if you swapped a 1 out of it, then you try again; you swapped a 1 with a 1 so you didn't interfere with the process that's currently doing the increment 05:01:49 <\oren\> so they made us do it with swap only because it's harder 05:01:52 with compare-and-swap, what you do is you first (nonatomically) read the value, say it's x 05:02:01 then you swap in x+1 if the current value is x 05:02:12 if you swapped an x out, everything is fine, you're done 05:02:13 ais523 : but what if you have a 1 and then a third thread comes in? then the third thread will see a false 0 05:02:34 if you didn't, then try again, you didn't change anything as you did a read and a failed-CAS 05:02:40 mad: no it won't 05:03:16 oh 05:03:22 wait I guess I see 05:03:27 here's my program: /*x*/ while (swap(guard, 1)); /*y*/ val++; /*z*/ swap(guard, 0) 05:03:50 yeah that works if the cpu doesn't reorder memory writes 05:03:59 yep 05:04:03 and reads 05:04:10 and an atomic swap is normally assumed to contain appropriate memory barriers 05:04:18 to protect anything that's ordered relative to it 05:04:29 which means it should work on x86 but not necessarily other platforms 05:04:34 (either in the processor architecture itself, or because it's a wrapper for the instruction + the barrier) 05:04:56 mad: The underlying instruction, sure, but any real-world use would have the appropriate memory barrier. 05:04:56 ais523 : as opposed to cmpxchg which.... doesn't really need barriers I think? 05:05:12 Because it's not at all helpful if it's not a synchronization primitive. 
:) 05:05:43 mad: well it depends on what the memory sequencing properties of the compare-and-swap are 05:05:53 it needs to contain at least a barrier on the things it's swapping 05:06:09 but really you need them in order to avoid time paradoxes 05:06:16 well, the point of compare-and-swap is to have memory order guarantees against some other thread also doing compare-and-swap on the same value 05:06:34 so presumably it has at least some kind of barrier against itself 05:07:05 That's the "lock" prefix on x86. 05:07:18 right 05:07:21 Without it, cmpxchg isn't atomic WRT other threads. :) 05:07:25 -!- lleu has quit (Quit: That's what she said). 05:07:28 something that happens in Verity at the moment (assignment in Verity is atomic but has no barrier): new x := 0 in new y := 0 in {{x := 1; y := 2} || {y := 1; x := 2}}; print(!x); print(!y) 05:07:41 can print 1 1 even if you had a barrier betwen the parallel assignment and the prints 05:08:25 this is because there's no barrier between the assignments to x and to y, and in particular, the four assignments can happen /literally/ simultaneously, in which case it's unspecified which ones win 05:08:46 that seems normal? 05:09:03 Yes, but it's weird to people used to x86's memory model. 05:09:15 mad: well there isn't any way to interleave {x := 1; y := 2} and {y := 1; x := 2} that leaves both variables set to 1 05:09:28 well 05:09:33 x := 1 happens 05:09:43 oh 05:10:04 Reordering is fun. 05:10:11 pikhq: it's not even reordering 05:10:15 the print() stuff happens on the 2nd thread? 05:10:15 it's just simultaneity 05:10:23 mad: || is a thread split + join 05:10:24 after the x:=2 05:10:41 where's the join? 05:10:49 i.e. 
I temporarily fork into two threads, one does {x := 1; y := 2} and the other does {y := 1; x := 2} 05:10:51 then the threads join 05:10:57 || is a fork + join operator 05:11:15 I guess you're right, that can't happen in the x86 memory model 05:11:23 unless the compiler reorders the writes 05:11:35 (which afaik it totally can) 05:11:38 in Verity, the compiler doesn't reorder the writes, it's just that all four happen at the exact same time 05:11:58 mad: right, in gcc you'd need a compiler barrier 05:12:02 The x86 memory model is one of the stronger ones out there. 05:12:07 like "asm volatile ();" 05:12:17 to prevent gcc reversing the order of the assignments to x and to y 05:12:23 pikhq : they probably had no choice :D 05:12:31 considering all the apps out there 05:12:40 well most programs out there at the time were single-threaded 05:12:41 ais523: I'm not sure if that's actually a full compiler barrier. 05:12:47 pikhq: err, right 05:12:51 asm volatile (:::"memory") 05:12:53 I tend to use asm volatile("" ::: "memory"); 05:12:59 Yeah. 05:13:45 there's probably less compiler memory op reordering on x86 though 05:13:53 due to the structure of the instruction set 05:13:56 mad: It's actually a fairly arbitrary choice, given that it would *only* affect programs and OSes that were aware of multiprocessing, and when introduced this was very close to 0. 05:15:04 I remember that when real multiprocessor systems started to happen there were a few apps that started failing 05:15:12 not that many tho 05:15:56 hmm, Verity's || operator was called , in Algol 05:16:02 Yes, they'd be ones that used threads incorrectly. 05:16:11 Verity is an Algol derivative, after all, so it's not surprising it has one 05:16:28 is {x := 1; y := 2} implicitly unordered? 05:16:28 however, it's surprising that it isn't seen more often in modern languages 05:16:32 Hence why it would be not that many -- threading is a bit niche without multiprocessor systems.
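[editor's note] The compiler barrier discussed above, placed in context; a sketch using the portable standard-library spellings rather than the GCC-specific asm (the function and variable names are illustrative only):

```cpp
#include <atomic>

int x = 0, y = 0;

void writer() {
    x = 1;
    // Compiler barrier: stops the compiler reordering the two stores.
    // (GCC-specific spelling: asm volatile("" ::: "memory");)
    std::atomic_signal_fence(std::memory_order_seq_cst);
    y = 2;
    // On weakly-ordered CPUs a hardware fence is also needed before
    // another thread can rely on observing the stores in this order:
    std::atomic_thread_fence(std::memory_order_release);
}
```

The compiler barrier alone suffices on x86 (whose hardware model already orders plain stores); other architectures need the hardware fence too, which is the distinction the discussion is drawing.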
05:16:33 mad: no, it's ordered 05:16:48 assignment to x happens before, or simultaneously with, assignment to y 05:17:08 'or simultaneously with' 05:17:27 a write to a variable cannot happen simultaneously with a write or read that comes earlier 05:17:41 and if a write and read happens simultaneously you get the new value 05:17:45 there, those are Verity's timing rules 05:17:54 ais523: Huh, that's actually kinda-sorta related to C's , introducing a sequence point, then, isn't it? 05:17:58 (by simultaneously, I mean on the same clock edge) 05:18:08 Erm, no, no it isn't. 05:18:28 pikhq: for if you want even more detail on how it works: 05:18:40 it's call-by-name so naming a variable can be seen a bit like a function call 05:18:48 and the same call can't return twice on the same cycle 05:19:07 however, for "simple" reads of variables the call can be optimized out 05:19:35 (it just looks at the bits in memory directly) 05:20:10 if all read/writes in a group are to different variables, they can happen all at the same time? 05:20:17 yes 05:20:29 then I guess they can be reordered no? :D 05:20:38 "the same call can't return twice on the same cycle" is the /only/ rule slowing the program down (apart from some corner cases wrt recursion) 05:20:44 mad: no, in x := 1; y := 2 05:20:49 the write to y can't happen before the write to x 05:20:57 it happens simultaneously (same clock cycle) or later 05:21:28 hm 05:21:30 (in this particular case it would be simultaneous because 2 is a constant, and thus there's nothing that could delay the write to y) 05:22:49 -!- bender|_ has changed nick to bender|. 05:22:57 -!- bender| has quit (Changing host). 05:22:57 -!- bender| has joined. 05:23:03 what if you had x := some_calculation; y := 2 05:23:04 ? 
05:23:06 fwiw I consider this behaviour to potentially be a bug, but we've decided that for the time being at least it isn't (also it makes the program run faster, which is a good thing in the abstract) 05:23:21 mad: x and y would be assigned at the same time, when the calculation completed 05:23:39 meanwhile x := 2; y := some_calculation would assign x first, start the calculation that cycle, and assign y when the calculation completed 05:23:44 which might or might not be that cycle 05:23:52 what about 05:24:06 x := some_calculation; y := some_calculation 05:24:08 ? 05:24:48 how much of y's calculation can overlap with x's calculation? 05:24:55 runs the calculation, when it finishes delays one cycle; then assigns the result to x and starts running the calculation again, when it finishes assigns the result to y 05:25:32 note the "delays one cycle", this is automatically inserted to fulfil the rule that prevents the same block of code being used for two different purposes at the same time 05:25:49 what about 05:25:56 x := some_calculation; y := some_other_calculation 05:26:12 those could happen on the same cycle (unless the two calculations involve shared resources) 05:26:22 ah ok 05:26:23 I see 05:26:25 obviously, they only would if some_other_calcuation took zero cycles 05:26:40 as some_other_calculation doesn't start until some_calculation has finished 05:26:42 and to complete the set 05:26:50 x := some_calculation || y := some_other_calculation 05:27:04 would run both calculations in parallel regardless of what arguments they took or how long they took 05:27:57 is this designed for some specific piece of hardware? :D 05:29:19 pretty much the opposite: it designs specific pieces of hardware 05:29:29 to run the program you entered 05:29:37 e.g. via programming an FPGA 05:29:46 does it compile to verilog or something like that? 05:29:49 yep 05:29:52 VHDL, in this case 05:30:03 -!- lynn has joined. 
05:30:27 and ofc the big advantage of designing hardware is that you can do things in parallel for free 05:30:36 so long as you don't need access to shared resources 05:31:17 mhm 05:31:18 one of my coworkers is looking into rewriting "x := a; y := b" as "x := a || y := b" if it can prove that the two programs always do the same thing 05:31:32 which would give a big efficiency gain without requiring people to place all the || in manually 05:31:51 that sounds like an aliasing resolution problem 05:32:10 -!- dingbat has quit (Quit: Connection closed for inactivity). 05:33:37 the standard approach to that is renaming but then it can parallelize the variables but not the name changes 05:33:38 well, much of our theoretical research has been in that direction 05:33:53 in particular, we statically know whether any two things can share or not 05:34:08 we don't have aliasing problems because Verity disallows storing anything other than integers in pointers 05:34:15 *integers in variables 05:34:21 (in particular, you can't store a pointer in a variable) 05:36:38 how does it know what to put in dram, block ram and in logic fabric registers? 05:39:20 arrays go in block ram, non-array variables in logic fabric (unless a large number of copies are required due to, e.g., them being local to a recursive function) 05:39:31 -!- lambda-calc has changed nick to lambda-11235. 05:39:32 dram isn't used by the language itself but you could write a library to access it 05:39:51 (assuming you're talking about external ram) 05:39:59 ("d" could expand in more than one way here) 05:45:10 -!- bender| has quit (Remote host closed the connection). 05:45:37 is "array[x] := n || array[y] := m" a compilation error? 
05:46:39 yes but only because arrays use () for indexing rather than [] 05:47:02 although, interestingly, "array(x) := n || array(y) := m || array(z) := l" will give you a warning 05:47:24 the reason is that you can't do more than two writes to block RAM simultaneously in hardware 05:47:39 yeah obviously 05:47:45 and thus it has to add extra components to serialize the writes so that no more than two happen at a time 05:48:40 what mode does it use the bram's port in? read_before_write? 05:49:03 "warning: made 3 copies of an array's read/write ports" "info: at most two read/write ports can be supported efficiently" 05:49:10 and read-before-write, yes 05:49:27 not that it matters, all that changes is the behaviour in race conditions 05:50:37 that said, I'm currently working on implementing pipelining 05:51:00 in which case "array(x) := n || array(y) := m || array(z) := l" would do the writes on three consecutive cycles and thus you wouldn't get the warning 05:51:23 but then your throughput would go down :D 05:53:21 yes; this is something we might want to look at later 05:56:27 I've been really into trying to find an alternative to RISC/CISC/VLIW for practical CPUs 05:58:29 it's hard to balance between too static-scheduled (VLIW being simple but stalling easily etc) and too dynamic-scheduled (RISC/CISC start breaking down majorly over about 4 instructions per cycle) 05:59:13 as this is #esoteric, I'm wondering if there are any other alternatives 05:59:36 even if it's a pretty hppavilion[1] reaction to the problem 05:59:46 I have some interesting designs but nothing approaching the simplicity of RISC 06:00:21 what about a CPS processor? 06:00:34 i.e. 
"run this command, once it finishes running, do this other thing next" 06:00:45 although that's pretty similar to hyperthreading, really 06:01:04 it falls down on what exactly a "command" is :D 06:01:10 and there's a reason processors don't run entirely on hyperthreading 06:01:52 I thought hyperthreading was basically just a way to keep the cpu active when loads have fallen out of data cache and it's that or stalling 06:01:56 :D 06:02:29 -!- XorSwap has quit (Quit: Leaving). 06:02:38 or, in the case of sparc, a way of wiggling their way out of doing an out-of-order while keeping okay performance :D 06:03:32 ais523 : what runs in parallel in a CPS processor? 06:04:12 mad: I guess you can start multiple commands (well, opcodes) running at the same time 06:04:18 basically via the use of a fork opcode 06:04:42 the question is, do we also need a join, or do we just exit and run the code for its side effects? 06:04:58 how do you tell if the opcodes are truly independent or have dependencies? 06:06:01 -!- lynn has quit (Read error: Connection reset by peer). 
06:06:35 the approach I've been looking at is extremely small "threads" 06:06:42 like, 3 instruction long for instance 06:07:24 you don't have to, you just run them whenever they become runnable 06:07:56 I guess that if you add join, this is basically just a case of an explicit dependency graph 06:08:08 if your commands do loads/stores on the same memory you need to know what happens 06:08:13 which is a bit different from VLIW 06:08:20 but similar in concept 06:08:54 VLIW dependency is handled by keeping everything in some exact known sync 06:09:53 compiler scheduler knows the sync and fills the instruction slots 06:10:25 generally it works well for DSP code (lots of multiplies and adds etc) but not well at all for load-store-jump code 06:10:33 which is why VLIW is typically used in DSPs 06:11:04 ah right 06:11:10 well I'm basically thinking of the Verity model but on a CPU 06:11:33 some CPUs simply run all loads and stores in-order 06:11:36 if two things don't have dependencies on each other, you run them in parallel 06:11:44 everything else can be reordered willy-nilly though 06:12:20 this means that the CPU needs to be able to handle large numbers of threads at once (probably a few hundred in registers, and swapping if the registers get full), and needs very cheap fork/join 06:12:23 ais523 : true, but if your two things are memory addresses calculated late in the pipeline, it's very hard to tell that they have dependencies 06:12:35 OTOH, so long as you have enough threads available, you don't care much about memory latency, only bandwidth 06:12:46 just run something else while you're waiting 06:12:59 this is similar to GPUs but GPUs are SIMD at the lowest levels, this is MIMD 06:13:20 mad: well the dependencies would be calculated by the compiler 06:13:36 compiler can only calculate so many dependencies 06:13:39 ideally via the use of a language in which aliasing problems can't happen 06:14:02 ais523: ALIW and OLIW are some alternatives to RISC, CISC, and VLIW 
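[editor's note] The cheap fork/join pattern being discussed (Verity's || operator, many small "threads" joined on completion) has a rough software analogue in std::async; a toy sketch, not a model of the proposed hardware:

```cpp
#include <future>

// Run two independent computations in parallel and join on both:
// roughly "x := f(a) || y := g(b)" in the notation above.
int parallel_sum_of_squares(int a, int b) {
    auto fx = std::async(std::launch::async, [a] { return a * a; });
    auto fy = std::async(std::launch::async, [b] { return b * b; });
    return fx.get() + fy.get();  // join: blocks until both finish
}
```

The two lambdas share no state, so there are no dependencies for the compiler (or a hypothetical MIMD CPU) to resolve, which is exactly the case where the parallelism is free.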
06:14:03 in fact the ideal situation for the compiler is that loads and stores never move 06:14:14 every other instruction is easy to move 06:14:30 in most practical languages, though, loads and stores happen a lot 06:14:42 hmm, can we invent some sort of functional memory for functional languages? 06:14:44 it's just calculations and it's all in SSA form so it knows exactly what depends on what and how to reorder stuff 06:14:53 i.e. memory never changes once allocated, it can go out of scope though 06:14:59 ais523: I thought of that once- the ASM of Haskells 06:15:08 what I was thinking of was C++ with absolutely no pointers 06:15:22 just use Verity :-P 06:15:23 and every object or array is copy-on-write 06:15:31 there have been some experiments of getting it to run on CPU 06:16:02 no dynamic typing or garbage collection or other slow features 06:16:15 ais523: What other properties should the FMM have? 06:16:27 only copy-on-write because it's the one thing that can prevent aliasing 06:16:41 hppavilion[1]: FMM? 06:16:48 ais523: Functional Memory Model 06:17:06 mad: not the only thing, you can use clone-on-copy instead 06:17:09 it's just slower usually 06:17:36 (it's faster for very small amounts of data, around the scale of "if you have fewer bits in your data than you do in an address") 06:17:41 but then don't you need references if you use clone-on-copy 06:17:45 ? 06:18:30 references so that you can point to objects that you're going to read from without doing tons of copies 06:18:40 I didn't say it was efficient 06:18:42 just that it works 06:19:11 that's why I'm suggesting copy-on-write 06:19:26 hppavilion[1]: the main problem with a functional memory model is handling deallocation 06:19:39 you can a) use reference counts, b) use a garbage collector, c) clone on copy 06:19:54 method c) is used by most esolang impls AFAIK 06:20:15 what do haskell etc use? 06:21:07 ais523: Interesting... 
06:21:44 mad: normally garbage collectors, for most workloads it's the most efficient known solution 06:21:57 although it requires a lot of complexity to get it more efficient than reference counting 06:22:41 can functional programming generate cycles? 06:22:43 I personally like reference counting, especially because it allows you to implement an optimization whereby if something is unaliased at runtime (i.e. the reference count is 1), you can just change it directly rather than having to copy it first 06:23:07 that's what copy-on-write is no? 06:23:27 there are language features which can cause cycles to be generated; however, some functional languages don't include those features 06:24:02 copy-on-write doesn't necessarily check for refcount 1, some implementations check for never-cloned instead 06:24:25 which means that you don't have to update the refcount when something leaves scope 06:24:41 but what if it was cloned but then the clone went out of scope? 06:24:47 then you have a useless copy 06:24:50 yep 06:25:07 but without a refcount you don't know it's useless until the next gc cycle 06:25:46 the idea of having COW everything is that also when you need a copy, typically you only need a copy of the topmost layer 06:25:58 it's possible that the extra copies are faster than the refcount updating 06:26:02 ie an object containing a bunch of sub-objects 06:26:14 most likely because you're just copying a wrapper that contains a couple of pointers 06:26:24 if you have to copy the object, you don't need any copy of the sub-objects 06:26:31 except the ones that are really different 06:26:32 and yes, I think we're making the same point here 06:27:33 how expensive is refcounting anyways? 
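[editor's note] The refcount-1 optimization described above (mutate in place when unaliased, copy first when shared) can be sketched with std::shared_ptr, whose use_count() exposes the reference count; all names are hypothetical:

```cpp
#include <memory>
#include <vector>

// Copy-on-write buffer: mutation copies the payload only when it is
// shared (use_count > 1); with a reference count of exactly 1 it
// writes in place, exactly the optimization described above.
class CowBuffer {
    std::shared_ptr<std::vector<int>> data_;
public:
    CowBuffer() : data_(std::make_shared<std::vector<int>>()) {}
    void push(int v) {
        if (data_.use_count() > 1)  // aliased: take a private copy first
            data_ = std::make_shared<std::vector<int>>(*data_);
        data_->push_back(v);        // unaliased: mutate in place
    }
    const std::vector<int> &get() const { return *data_; }
    // Caveat: use_count() is only a reliable test in single-threaded
    // use; a production COW needs an atomic refcount protocol.
};
```

Copying a CowBuffer copies only the wrapper (a couple of pointers), matching the "copy the topmost layer" point above.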
06:27:37 it's just +/- 06:27:53 it's pretty expensive because it screws up your cache 06:28:12 whenever something gets copied or freed, you have to a) dereference it, b) write a word of memory next to it 06:28:37 which means that less fits in your cache, and copy and free operations end up bumping something into cache that isn't immediately needed 06:28:47 -!- mysanthrop has changed nick to myname. 06:28:48 isn't it reading in 1 cache line that's probably going to be read by whatever next object operation on that object? 06:29:02 for a free, you probably aren't planning to use the object again for a while ;-) 06:30:00 well, for a free you start by -- refcount, checking it, it's 0, then you have to go through the whole destructor so that's more accesses to object variables no? 06:31:42 oh, you're assuming there's a nontrivial destructor 06:31:54 I'm not, destructor is often trivial 06:32:23 well, it must decrease child object refcounts no? 06:32:31 yes, /but/ we're comparing refcounting to GC 06:32:34 and eventually call free() 06:32:41 GC doesn't need to decrease the child object refcounts 06:33:47 so it doesn't have a need to pull the object into cache 06:34:10 fwiw, I think there's little doubt that refcounting is better if you have a lot of nontrivial destructors 06:34:15 but that doesn't come up very often 06:34:42 hmm 06:35:37 -!- lambda-11235 has quit (Quit: Bye). 06:37:00 it sounds like it depends on the "shape" of the objects you're freeing 06:37:14 depending on average size and average number of levels 06:38:33 other issue is 06:38:50 suppose you have some large global object with some error logger in it 06:39:27 some function of some small object within that global object does whatever 06:39:33 and then logs an error 06:40:09 how do you avoid forcing the user to make the function take the large global object as an explicit argument?
:D 06:41:01 this is one of the largest problems in OO, possibly programming generally 06:41:12 there are a lot of proposed solutions but I'm not sure if any of them are actually good ones 06:41:46 I know only the C++ solution, which is that you store a pointer to the large global object in the small object 06:41:53 but then that breaks any purity 06:42:18 look up dependency injection, it's crazy 06:42:38 and it introduces a reference cycle 06:42:59 err, dependency injection frameworks 06:43:11 dependency injection itself is just the concept of passing the large global as an argument 06:43:19 but the interest comes from doing it /implicitly/ 06:43:36 normally via some sort of code transformation, either at compile-time or run-time 06:43:39 (which is why it's crazy) 06:44:20 -!- nortti_ has changed nick to nortti. 06:46:17 anyhow 06:47:04 without solving aliasing then basically you're designing a cpu for executing C++ 06:47:54 and I don't think it's possible to design a cpu for higher level languages 06:48:31 because C++ tends to have all the real low latency operations basically 06:48:58 and in particular the ones that have few sideeffects 06:49:04 side effects are deadly 06:50:26 well I don't think a language can be considered higher-level nowadays if it doesn't provide at least some way to manage side effects 06:51:03 dunno, aside from functional languages 06:51:23 my impression is that most high level languages have great tools for CAUSING side effects 06:51:27 :) 06:52:02 witness all the perl-python-lua-js type of languages that never even got multithreading 06:55:11 I can't think of any approach other than multithreading and functional-style-purity for managing side effects 06:55:32 especially long-term side effects 06:56:25 for short term side effects generally you have the whole LLVM style thing where it uses SSA on non-memory values and then LLVM-style alias resolution loads/stores 06:56:33 and...that's it! 
06:57:27 unless you count SIMD as a form of side-effect management 06:57:32 (which I guess it is!) 06:58:04 -!- dingbat has joined. 07:01:10 that's why the MIPS is still the "top" design in a way 07:01:30 -!- Sprocklem has joined. 07:04:32 mad: well Verity compiles via an intermediate language SCI, which has the property that aliasing will fail to compile 07:04:51 although it sacrifices quite a lot to accomplish that 07:04:59 figures 07:05:54 well, it compiles to vhdl so it's essentially a low level language no? 07:05:55 -!- carado has joined. 07:06:50 mad: Verity is low level, yes 07:07:04 however the principles behind SCI were originally expressed in a language which was (at the time, at least) pretty high level 07:10:40 if you're going towards agressive threading then the target kind of cpu is pretty clear 07:10:50 stick in a bunch of in-order RISCs 07:10:58 as many as you can fit 07:11:32 each new core = new DCACHE = 1 more potential load per cycle 07:11:50 or 2 loads if you have a 2 port DCACHE 07:12:38 I think you also need to have more threads "ready to go" than you do CPUs 07:12:46 yeah 07:12:53 so that you can suspend some while waiting for memory access, branch prediction failure, etc. 
07:12:59 you'll probably want some degree of hyperthreading to fill in stalls 07:13:01 yes 07:13:09 actually if you have enough hyperthreads you needn't even bother to predict branches 07:13:19 just run something meanwhile while working out whether to take them or not 07:13:34 hm 07:14:28 I think the branch predictor is worth the trouble 07:14:41 it's not that complex at low IPC 07:15:05 also at low IPC your pipeline is likely to be short 07:16:00 this is basically the ultraSPARC 07:16:28 oriented towards load-store-jump code that has lots of threads 07:16:31 ie servers 07:17:26 you could totally write a compiler to use lots of threads if they were that lightweight 07:17:34 and they'd be very load-store-jump-mimd heavy 07:18:15 you'd need some sort of threading that doesn't have to go through the OS's scheduler 07:19:08 and get people to use tons of small threads in their code 07:19:22 yes 07:19:35 the latter is something that'll be increasingly necessary to increase performance as time goes on 07:19:59 and hardware thread scheduling is a natural extension of that 07:20:20 the problem is that generally if the OS's scheduler is involved, that probably already wipes out your potential benefits in lots of cases 07:20:42 ais523: have you looked at Rust? I don't remember if it came up yet and whether I've told my first impression opinions. 
07:20:47 also there's a limit to how much threading you can get going 07:21:08 b_jonas: yes, this channel used to have a lot of rust discussion 07:21:11 every cpu you add to a system makes the synchronization system between core memories harder 07:21:13 I like it 07:21:24 that said, I don't think I know your opinion on Rust, either because you haven't told me or because I've forgotten 07:21:35 mad: NUMA 07:22:08 that's starting to sound like the PS3's CELL :D 07:23:11 it was ahead of its time 07:23:51 NUMA is going to get more and more popular as time goes on, basically because there just isn't really any other option if we want computers to keep getting faster in terms of ability-to-execute-programs 07:24:11 there's always aggressive SIMD 07:24:48 which gives you nothing for load-store-jump programs 07:25:07 but I don't think anything's going to help load-store-jump programs by this point 07:25:55 mad: simd and numa have different roles. they both help, and I'm very interested in simd, but at some point even if you write optimal simd programs to reduce memory and cache load, you'll run out of memory bandwidth, and numa is the only technically realistic way to increase it 07:26:02 the problem with SIMD is that although it's good for some workloads, those are typically the workloads you'd run on a GPU 07:26:15 ais523: that's not quite true 07:26:18 so it's more of a stopgap until people get better at writing multithreaded programs 07:26:20 ais523: no way 07:26:31 CELL worked because video games have some mathy calculations to offload 07:26:57 ais523: it's that people are buying into the GPU hype and very few people are trying to learn to actually use SIMD and cpu programming in a good way 07:27:16 (this is partly why I'm very interested about it) 07:27:23 you can put hundreds of cores on a CPU if they can't access any memory :D 07:27:33 ais523: yes, there's some overlap, but still, I don't think GPUs will solve everything 07:27:59 gpus solve one problem, rendering 
video games 07:28:30 other problems might see a speed gain only as much as they look like video game rendering :D 07:28:34 GPUs actually have similar levels of SIMDiness to CPUs; their strength is that they can run the same code on thousands of threads, but not necessarily with the same control flow patterns 07:29:12 as far as I can tell the GPU's advantage is that basically memory writes only happen to the frame buffer 07:29:18 they're bad at pointer-heavy stuff, and in general, at things with unpredictable memory access patterns 07:29:24 so GPUs have essentially no aliasing to solve 07:29:55 mad: they have block-local storage, which is basically a case of manually-controlled caching 07:30:01 where you load and flush the cache lines manually 07:30:15 once aliasing comes into the picture (or heavy feedback loops) CPUs take the upper hand afaik 07:30:46 I might be dismissing gpu stuff too much due to how overhyped it is 07:31:08 mad: it's mostly just that GPUs are bad at pointers 07:31:27 it comes down to how few GPU-able problems there are I think 07:31:27 aliasing isn't any harder than dereferencing nonaliased memory, they're both hard 07:32:19 aliasing forces your memory operations to be in-order basically 07:32:36 and adds lots of heavy checks the more you reorder your operations 07:33:08 eventually you end up with giant content-addressable-alias-resolution buffers and whatnot 07:33:31 and everything becomes speculative 07:33:51 -!- mroman has joined. 07:34:17 well how useful is unpredictable aliasing from a program's point of view? 07:34:25 b_jonas: SIMD is a good fit for "occasional", "one-off" computations. GPGPU is a good fit for "pervasive" large computations. people seem to easily confuse the differences. 07:34:57 lifthrasiir: hmm: what would you say is the best way to zero a large amount of RAM?
07:34:59 ais523 : it's mandatory to guarantee correctness 07:35:06 (and when one needs occasional large computations, one is advised to avoid them) 07:35:08 mad: not from the compiler's point of view 07:35:09 the program itself 07:35:22 how often do you write a program that benefits from aliasing, and can't predict where it happens in advance? 07:35:28 ais523: DMA. 07:35:31 sorry, kidding! 07:35:42 lifthrasiir: that didn't seem that stupid to me 07:35:45 well 07:35:58 I was actually thinking that systems might benefit from a dedicated hardware memory zeroer 07:36:10 Windows apparently zeroes unused memory in its idle thread 07:36:18 ais523: but I think it is not a good way to approach the problem. why do you need a large amount of zeroed memory after all? 07:36:29 as something to do (thus it has a supply of zeroed memory to hand out to programs that need it) 07:36:59 then I guess SIMD or other OS-sanctioned approach is the necessary 07:37:05 lifthrasiir: basically a) because many programs ask for zeroed memory; b) you can't give programs memory that came from another program without overwriting it all for security reasons, so you may as well overwrite with zeros 07:37:06 GPGPU is not really an option there 07:37:11 well, if you write to a variable, eventually you're going to want to read from it 07:37:20 fundamentally that's aliasing 07:37:26 GPGPU could zero GPU memory quickly just fine; the problem is that it uses different memory from the CPU 07:37:30 and the copy between them would be slow 07:37:38 yes. 
that's why it is not an option 07:37:40 (right now) 07:38:18 DMA is a joke, but the hardware-wired way to zero memory may be somehow possible even in the current computers 07:38:23 mad: yes but often both pointers are literals (because you use the same variable name both times), so the aliasing is predictable 07:38:31 for instance, a delay buffer for an echo effect 07:38:44 how fast it aliases depends on the delay time you've set 07:39:11 yes, that's a good example of a "memmove alias" 07:39:19 ais523 : aliasing isn't predictable if you use very large array indexes :D 07:39:43 I'm kind-of wondering, if restrict was the default in C, how often would you have to write *unrestrict to get a typical program to work 07:39:49 mad: larger than the array, you mean? :D 07:40:34 yeah but the cpu doesn't know the array size 07:40:43 most of the time even the compiler doesn't know 07:41:01 -!- tromp has quit (Remote host closed the connection). 07:41:12 mad: well that at least is clearly something that can be fixed by higher-level languages 07:41:20 there's also the case of, well, you're accessing a class that has pointers in it 07:41:38 and it's hard to tell when your code will read out one of those pointers and write to that data 07:42:14 you do know what restrict means, right? 07:42:21 -!- AnotherTest has joined. 07:42:34 "data accessible via this pointer parameter will not be accessed without mentioning the parameter in question" 07:42:35 ais523 : higher-level languages can abuse references to cause surprise aliasing 07:43:05 I wasn't aware of the exact semantics of restrict 07:43:07 example? 
mostly because it'll help me understand what you're considering to be higher-level 07:44:04 hmm 07:44:15 consider a java function working on some array 07:44:23 “[GPUS] they're bad at pointer-heavy stuff, and in general, at things with unpredictable memory access patterns” – are they also bad at unpredictable local sequential access of memory, such as decoding a jpeg-like huffmanized image that's encoded as 256 separate streams, you have an offset table for where the huffman input of each stream and the output of each stream starts, 07:44:38 and within one stream, you can read the huffman input and the output pixels roughly sequentially? 07:44:41 then it reads some member variable in one of the objects it has as an argument 07:45:03 the member variable is a reference to the same array the java function is working on 07:45:09 and it uses it to poke a value 07:45:38 “I'm kind-of wondering, if restrict was the default in C, how often would you have to write *unrestrict to get a typical program to work” – isn't that sort of what Rust is about? 07:45:42 b_jonas: so long as what you're indexing is either a) stored in memory that's fast to read but very slow to write, or b) fits into block memory (basically a manually-controlled cache), you can dereference pointers 07:46:02 and I don't think that's how restrict in C works 07:46:02 b_jonas: it's similar, yes 07:46:25 -!- AnotherTest has quit (Ping timeout: 240 seconds).
07:46:38 mad: that's nothing to do with Java being high-level, IMO 07:46:55 this example applies to most non-pure languages 07:47:03 storing a reference to something inside the thing itself is a pretty low-level operation 07:47:05 like perl and python and whatnot 07:47:07 afaik 07:47:23 well, your function gets some array argument 07:47:24 actually, if you do that in Perl, you're supposed to explicitly flag the reference so as to not confuse the garbage collector 07:47:30 and some object 07:47:33 *reference counter 07:47:43 and the object has a reference to the array but you don't know 07:48:25 ais523: well, if there are 256 streams, and you're decoding only one channel at a time and assembling the three channels later in a second pass, then each stream should be at most 8192 bytes long, its output also 8192 bytes long, plus there's a common huffman table and a bit of control information. 07:48:36 there's no self reference in my example 07:49:06 mad: well, say, in SCI (which is designed to avoid aliasing), if you give a function two arguments, any object can only be mentioned in one of the arguments 07:49:08 Oh, and some local state for each 8x8 block that might take say 512 bytes. 07:49:15 b_jonas : isn't huffman decoding inherently sequential? 07:49:26 (I'm assuming a 2048x1024 pixel image, 8 bit depth channels.) 07:49:51 mad: yes, but if you use a shared huffman table and you mark where each stream starts in the input and output, then you can decode each stream separately 07:50:20 mad: that is actually practical for image decoding, and also for image encoding or video de/encoding, but those get MUCH hairier and more complicated 07:50:22 ais523 : if it avoids aliasing then it's in a different category 07:50:40 mad: I'm saying that putting limits on aliasing is higher-level than not putting limits on aliasing 07:50:46 mad: note that this is pure huffman encoding, like jpeg, not deflate-like copy operations from a 16k buffer of previous output.
07:50:48 because it means that you have more information about the data you're moving around 07:51:07 mad: the copy operations are why PNG/zip decompression is really impossible to parallelize or implement fast these days 07:51:39 gzip/zip/PNG made lots of sense when they were invented, but less sense for today's hardware 07:52:03 b_jonas: deflate uses references to locations earlier in the output, right? how much would it change if it used references to locations as they were in the input file? 07:52:03 but JPEG is just as old and ages much better, which is why most modern video formats are similar to it, even if different in lots of specifics 07:52:17 in terms of compression ratio 07:52:17 b_jonas : I guess it works if you have multiple huffman segments that you know the start of 07:52:44 ais523: I'm not sure, I don't really know about modern compression algorithms, and it probably depends on what kind of data you have. 07:52:50 that seems to be GPU-acceleratable, although I haven't worked out the details yet 07:52:50 mad: actually I managed to persuade my friend to write a similar thing with the existing deflate stream 07:53:32 doesn't every huffman symbol basically depend on the previous one? 07:53:42 ais523: encoding a video also references previous frames, but in a way that I think is much nicer than gzip, because you only reference one or two previous frames, so you can decode per frame. it might still get ugly. 07:53:45 or specifically the length of the previous one 07:54:06 mad: the point is that DEFLATE uses the end code that is distinctive enough that it can be scanned much quicker 07:54:16 then the friend got stuck on the LZ77 window :p 07:55:05 -!- andrew_ has quit (Remote host closed the connection). 07:55:07 has anyone ever done some graph related database stuff?
07:55:09 (it was a term project AFAIK, and the friend did get A even though the prototype was only marginally faster) 07:55:19 Maybe I should write a toy image format and encoder and decoder, just to learn about how this stuff works, even if I don't get anything practically usable. 07:55:24 (since everyone else was doing JPEG decoder stuff) 07:55:33 mroman: I looked into it a bit for aimake 4 07:55:39 but didn't reach the point where it came to actually write the code 07:55:42 (There are already lots of practical image coders out there.) 07:55:43 so so far, all I have is plans 07:56:20 let's assume I have paths in my database A -> B -> D and A -> C -> D 07:56:26 ais523 : I think "non aliasing" for higher-level languages tends to be a synonym for "pure/no side effects" and often "functional" or maybe even "lazy-evaluated functional" 07:56:52 mad: err, the Haskell-alikes have tons and tons of aliasing 07:56:52 and I want to know for example if there's a traffic jam on A -> D 07:56:58 they're just constructed so that it never matters 07:57:08 it doesn't HAVE to be this way but afaik all the "no side effects" languages are functional 07:57:10 mad: to be more exact: DEFLATE stream stores the (encoded) tree in the front, and the tree is structured so that every prefix code is ordered by the length of code and then by the lexicographical order. since the end code is least frequent it should appear at the very end, i.e. all 1s. 07:57:26 ais523 : afaik haskell has no real aliasing?
07:57:43 > let x = 4 in let y = x 07:57:44 :1:14: parse error in let binding: missing required 'in' 07:57:51 > let x = 4 in let y = x in y 07:57:53 4 07:58:05 actually GHC probably optimized the aliasing there out 07:58:13 mad: the typical stream has 10--14 one bits for the end code, so the decompressor may try to speculatively decode the stream from that point 07:58:24 but x and y would be aliases in a naive Haskell implementation 07:58:31 there's just no way to tell from within Haskell itself 07:58:34 (and the project was for CELL processor, quite amenable for this kind of things) 07:58:56 because if two things alias, the normal way you tell is either to use a language primitive that tells you that, or to modify one and see if the other changes 07:59:17 ais523 : yes but they're basically not real aliases because you can't write in one and get surprise changes in the other 07:59:20 the traffic jam could be between A -> B, B -> D, A -> C, C -> D or A -> D itself 08:00:00 multiple readonly pointers to the same block of memory isn't a problem 08:00:00 mroman: huh, that's an interesting operation 08:00:12 mad: keep going and you'll invent Rust ;-) 08:00:24 other questions are: Are there paths from A to D that are not equally fast. 08:00:26 the problem is when one of these pointers writes something 08:00:42 and it's impossible to say which other pointers will see the write 08:01:10 at local level it's usually possible to figure it out (LLVM's alias solving does this) 08:01:18 at global level it becomes impossible 08:01:23 mroman: the SQLite docs have an example of doing transitive closure via a recursive query 08:01:47 I'm not sure if the performance is better or worse than running Dijkstra's algorithm from outside with a series of queries 08:01:56 that's one of x86's "voodoo" advantages 08:02:05 ais523: I have to afk for some hour now, but I can tell my preliminary opinion on rust later.
08:02:08 it doesn't require memory reordering to perform well 08:02:14 (the constant factor should be better, but the asymptotic performance might be worse if it's using a bad algorithm) 08:02:52 if it was possible to do more efficient memory reordering then x86 would be gone by now 08:03:41 some RISC or VLIW would have been twice as fast as x86 and everybody would be switching 08:05:41 as it is, the best cpu design practice, as far as I can tell, is to assume that loads/stores aren't going to move, and rearrange basically everything else around them 08:07:56 result: out-of-order execution 08:10:04 itanium tried to do compile time rearranging with some complex run-time checking+fallback mechanism 08:10:06 and it failed 08:15:57 -!- Elronnd has quit (Quit: Let's jump!). 08:21:21 -!- Elronnd has joined. 08:41:33 -!- tromp has joined. 08:46:18 -!- tromp has quit (Ping timeout: 276 seconds). 08:54:12 -!- hppavilion[1] has quit (Ping timeout: 244 seconds). 09:00:14 -!- bender| has joined. 09:04:30 -!- olsner has quit (Ping timeout: 276 seconds). 09:09:20 -!- ais523 has quit. 09:21:06 -!- AnotherTest has joined. 09:25:57 -!- AnotherTest has quit (Ping timeout: 268 seconds). 09:29:33 -!- J_Arcane has quit (Ping timeout: 240 seconds). 09:30:48 -!- olsner has joined. 09:36:34 -!- olsner has quit (Ping timeout: 240 seconds). 09:38:37 [wiki] [[Talk:Brainfuck]] https://esolangs.org/w/index.php?diff=46491&oldid=46410 * Rdebath * (+4885) Shortest known "hello world" program. -- Define "shortest"! 09:45:55 -!- andrew_ has joined. 09:59:25 -!- andrew_ has quit (Remote host closed the connection). 10:13:17 -!- nisstyre_ has changed nick to nisstyre. 10:13:27 -!- nisstyre has quit (Changing host). 10:13:27 -!- nisstyre has joined. 10:16:26 -!- AnotherTest has joined. 10:19:11 -!- int-e_ has changed nick to int-e. 10:25:59 -!- AnotherTest has quit (Ping timeout: 260 seconds). 10:35:23 -!- olsner has joined. 10:42:11 -!- tromp has joined. 10:45:42 -!- jaboja has joined. 
10:46:18 -!- tromp has quit (Ping timeout: 244 seconds). 11:37:27 -!- boily has joined. 11:42:25 -!- jaboja has quit (Ping timeout: 240 seconds). 12:16:30 FUNGOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOT! 12:16:41 `? fungot 12:17:04 fungot is our beloved channel mascot and voice of reason. 12:18:56 FireFly: MASCOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOT! 12:19:10 oops, wrong autocompletion. 12:19:34 fizzie: MASCOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOT! FUNGOOOOOOOOOOOOOOOOOOOOOOOOT! !?!???!?!?!!???!!!!!! 12:23:22 -!- boily has quit (Quit: NONPLUSSING CHICKEN). 12:51:19 -!- jaboja has joined. 12:53:43 -!- fungot has joined. 12:53:49 TOO LATE 13:02:49 fungot, how are you doing 13:02:49 Taneb: i'm sure it appeared on l:tu or winxp? ;p 13:09:29 -!- oerjan has joined. 13:36:29 -!- spiette has joined. 13:48:08 -!- AnotherTest has joined. 13:56:01 -!- jaboja has quit (Ping timeout: 240 seconds). 14:28:25 -!- Alcest has joined. 14:30:40 -!- zadock has joined. 14:42:14 @tell mad can functional programming generate cycles? <-- in haskell it can, e.g. lst = 1 : lst defines a cyclic list, which is nevertheless immutable. (Technically you can in ocaml too, but only for simple constant initializers.) 14:42:14 Consider it noted. 14:52:33 -!- `^_^v has joined. 14:56:24 -!- lambda-11235 has joined. 15:24:49 -!- UrbanM has joined. 15:28:03 hi please check out my website . http://sh.st/RptZh... ty :) i promise its not a virus 15:28:45 -!- tromp has joined. 15:29:16 got a virus 15:30:45 hi please check out my website . http://sh.st/RptZh... ty :) i promise its not a virus 15:32:13 -!- ChanServ has set channel mode: +o oerjan. 15:32:34 -!- oerjan has set channel mode: +b *!*Master@*.38.31.175.cable.t-1.si. 15:32:34 -!- oerjan has kicked UrbanM You are not _our_ Urban M. 15:33:03 -!- tromp has quit (Ping timeout: 244 seconds). 15:39:29 oerjan: of course the immutability of Haskell is a lie. 15:39:54 (I'm alluding to thunk updates.) 15:40:24 who is urban m? 15:41:21 brainfuck guy... 
yes 15:41:27 https://esolangs.org/wiki/Urban_M%C3%BCller 15:41:59 (ah, there was a question mark before the ellipsis. I typed that, then googled to confirm.) 15:42:55 however... the user above looked more like an imposter 15:44:04 sh.st... "shorten urls and learn money"... sounds legitimate 15:49:51 so what do we get... google analytics, tons of ads, some trackers, and did they actually put a captcha before the embedded link? 15:50:06 (I'm looking at page source code) 15:51:06 and there's a ton of javascript I haven't looked at. 15:52:57 -!- XorSwap has joined. 15:55:41 -!- lambda-11235 has quit (Quit: Bye). 15:57:42 int-e: thus i also mentioned ocaml hth 15:57:47 -!- oerjan has set channel mode: -o oerjan. 16:00:26 btw does ghc allocate a thunk for a simple lst = 1 : lst; lst :: [Int] 16:06:31 what is an l2 job? 16:06:41 -!- bender| has quit (Ping timeout: 250 seconds). 16:06:51 jobs outside of italy are so hard to grasp 16:08:16 -!- augur has joined. 16:09:31 -!- mroman has quit (Quit: Lost terminal). 16:12:06 -!- oerjan has quit (Quit: Later). 16:24:24 -!- augur has quit (Remote host closed the connection). 16:24:58 -!- augur has joined. 16:29:38 -!- augur has quit (Ping timeout: 250 seconds). 16:40:29 @tell oerjan btw does ghc allocate a thunk for a simple lst = 1 : lst <-- wow, apparently not (checked assembly output from ghc-7.10.2 with -O2, native code gen) 16:40:29 Consider it noted. 16:43:06 @tell oerjan even ghc-7.6.3 didn't allocate a thunk, that's as far back as I can easily go 16:43:06 Consider it noted. 16:50:47 -!- zzo38 has joined. 16:55:38 -!- Treio has joined. 17:04:31 -!- jaboja has joined. 17:15:35 -!- Treio has quit (Quit: Leaving). 17:17:03 -!- XorSwap has quit (Ping timeout: 240 seconds). 17:44:11 -!- XorSwap has joined. 17:54:06 -!- augur has joined. 18:06:44 -!- augur has quit (Remote host closed the connection). 18:09:19 -!- lambda-11235 has joined. 18:14:01 -!- MoALTz has joined.
18:33:44 https://github.com/bloomberg/bucklescript 18:38:19 -!- lleu has joined. 18:39:38 -!- augur has joined. 18:46:04 -!- heroux has quit (Ping timeout: 264 seconds). 18:46:47 -!- XorSwap has quit (Ping timeout: 244 seconds). 18:49:59 -!- augur has quit (Read error: Connection reset by peer). 19:08:07 -!- zadock has quit (Quit: Leaving). 19:11:01 -!- lynn has joined. 19:14:12 -!- heroux has joined. 19:21:10 -!- XorSwap has joined. 19:31:22 -!- hppavilion[1] has joined. 19:33:31 I am here 19:40:45 Did you work out those categories? 19:42:34 shachaf: I'm actively working on that xD 19:43:54 shachaf: I'm currently trying to figure out the type of the arrows in example (A) 19:44:12 ("Type" may not be the correct word, but it gets the point across if I send this message) 19:44:24 The type of an arrow from A to B is A -> B 19:44:46 shachaf: Yeah, I mean I'm trying to figure out what they represent 19:45:12 shachaf: I think the only thing I've figured out is that in (A), composition represents the transitive property of ≤ 19:45:35 Yes. 19:45:40 What does identity represent? 19:45:59 shachaf: The fact that a value is less than or equal to itself 19:46:04 (specifically, x = x) 19:46:10 aka reflexivity 19:46:12 (there for x ≤ x) 19:46:17 int-e: Yes, yes. 19:50:15 -!- lambda-11235 has quit (Ping timeout: 264 seconds). 19:51:17 shachaf: Wait, do arrows just represent arbitrary relations? 19:51:22 An arrow doesn't have to represent anything. 19:51:29 shachaf: Oh. 19:51:38 shachaf: So an arrow can just be an arrow? 19:51:46 It doesn't have to represent a function? 19:51:51 Or funtro 19:51:54 *functor 19:52:01 Or transformation of any sort 19:52:05 -!- lambda-11235 has joined. 19:52:06 Sometimes an arrow is just a cigar. 19:52:26 shachaf: Is arrow a type of cigar? 19:52:32 hppavilion[1]: you can interpret any relation on a set as a directed graph with that set as nodes (allowing loops, not allowing multiple edges) 19:52:56 Arrows don't have to represent functions, no. 
19:52:58 I don't smoke, so if it is a type of cigar I wouldn't get the joke 19:53:01 shachaf: Well yeah 19:53:02 Or transformations, whatever that is. 19:53:08 shachaf: It was the best word I could think of 19:53:19 but you really need reflexivity and transitivity to make a category that way 19:53:21 shachaf: Do arrows have to mean something, or can they just be arrows? 19:53:35 they can be just arrows 19:53:40 OK 19:53:47 int-e: And is that the case for category (A)? 19:54:20 int-e: Where a -> b iff a <= b 19:54:29 I don't know what example (A) refers to. 19:54:37 int-e: That ^ 19:54:39 ah. 19:54:57 well, arguably the underlying relation gives the arrow *some* meaning 19:55:12 it's really a philosophical question at this point. 19:55:15 int-e: Ah 19:55:36 int-e: But do they not represent anything in the way Set has arrows representing functions? 19:56:18 right 19:56:38 int-e: Or could it be argued that they represent Void? xd 19:56:39 *xD 19:56:45 (That was a joke, I think) 20:01:01 -!- Phantom_Hoover has joined. 20:17:43 -!- lambda-11235 has quit (Quit: Bye). 20:19:04 -!- XorSwap has quit (Ping timeout: 252 seconds). 20:42:54 -!- p34k has joined. 20:46:09 To allow other program to change resources of a window in the X window system, you could have the other program appends a null-terminated string to a property on that window, and then that client watches that property and reads and deletes it and adds that string into the resource manager. You can also send commands that aren't resources too in the same way, by adding a prefix to specify 20:47:31 Add RESOURCE_MANAGER into the WM_PROTOCOLS list to specify that this function is available, I suppose. 20:48:03 -!- spiette has quit (Ping timeout: 240 seconds). 20:48:17 Does it make sense to you? 20:52:17 The format of the property must be 8, the type must be STRING, and the mode must be PropModeAppend. 21:03:01 -!- spiette has joined. 21:05:16 -!- `^_^v has quit (Quit: This computer has gone to sleep). 
21:17:12 -!- augur has joined. 21:24:33 -!- augur has quit (Ping timeout: 240 seconds). 21:30:16 -!- ais523 has joined. 21:33:19 -!- hppavilion[1] has quit (Ping timeout: 252 seconds). 21:33:54 -!- hppavilion[1] has joined. 21:34:16 -!- spiette has quit (Quit: :qa!). 21:35:00 -!- spiette has joined. 21:39:22 -!- hppavilion[1] has quit (Ping timeout: 252 seconds). 21:47:31 -!- J_Arcane has joined. 22:04:25 -!- AnotherTest has quit (Quit: ZNC - http://znc.in). 22:43:27 -!- spiette has quit (Quit: :qa!). 22:47:13 -!- jaboja has quit (Remote host closed the connection). 22:49:51 whoa 22:50:31 I'm trying to line up the Szabó Lőrinc translation and the original of Tennyson: Ulysses exactly. But it turns out the translation is one line shorter. 22:50:50 It's missing the line that would correspond to “Death closes all: but something ere the end,” 23:08:27 -!- ais523 has quit. 23:27:37 -!- oerjan has joined. 23:27:46 -!- shikhin has changed nick to shikhun. 23:28:17 -!- shikhun has changed nick to shikhin. 23:28:20 @messages- 23:28:21 int-e said 6h 47m 51s ago: btw does ghc allocate a thunk for a simple lst = 1 : lst <-- wow, apparently not (checked assembly output from ghc-7.10.2 with -O2, native code gen) 23:28:21 int-e said 6h 45m 14s ago: even ghc-7.6.3 didn't allocate a thunk, that's as far back as I can easily go 23:34:03 -!- FreeFull has quit (Quit: Rebooting). 23:36:13 -!- FreeFull has joined. 23:41:15 -!- b_jonas has quit (Ping timeout: 250 seconds). 23:42:17 -!- b_jonas has joined. 23:43:30 -!- p34k has quit. 23:43:37 -!- hppavilion[1] has joined. 23:47:52 -!- boily has joined. 23:48:12 fungot: fungellot. 23:48:13 boily: it is edited to remove the 0 parameter? i was thinking 23:48:20 fungot: no you weren't. 23:48:21 boily: ( code is 2.1mb so that may not have 23:48:26 fungot: yes I do. 23:48:26 boily: wait a minute!! i am having trouble with this stub generator. it has. 23:48:47 fungot: stub generators suck, stubs suck, and asynchronous services especially suck.
23:48:47 boily: sperber was here mar 17 at 11:11 pm utc, saying: or check out file-select and tcp-listener-fileno 23:48:53 "it has." seems a bit too stubby indeed. 23:49:25 boheily. 23:49:59 hellørjan. 23:50:22 @@ @tell oerjan @@ @@ (@where weather) ENVA KOAK 23:50:22 who's sperber? 23:50:54 hellochaf. 23:50:57 @messages- 23:50:58 Plugin `compose' failed with: <> 23:50:59 You don't have any messages 23:51:13 ho hum. 23:51:28 boily: Good afternoon, person. 23:52:04 boily: i dunno but he was there mar 17 hth 23:52:21 silly oerjan 23:52:25 mar 17 hasn't happened yet 23:52:43 then why is fungot using past tense, duh 23:52:44 oerjan: with the procedure for-each? ie i have a question about static links. i really should read up on macros? like atom? 23:53:13 time to a fungot is an irrelevant concept hth 23:53:14 boily: i don't apply this level of dynamic typing... it mentioned that static typing is in the browser while allowing quick access to the enclosing command. 23:53:26 fungot: are you a dreen 23:53:26 oerjan: because bash gets exactly 3 parameters with that invocation, and 0 added to any number of arguments, you have 23:54:40 fungot: hingot 23:54:41 shachaf: some may.....but not all. but many more possibilities than chess. many. most things just work. at least now atm 23:54:55 ^style calvinandhobbes 23:54:55 Not found. 23:54:58 What! 23:55:10 fizzie: plz fix twh hth 23:58:26 * boily wraps fungot in a chicken costume 23:58:27 boily: and i think he said some weird things involving crazy symbols and actions. i'm purely interested in the same ballpark, and roughly between chicken and stalin might be one way of doing that 23:59:20 -!- grabiel has joined.