00:12:02 -!- b_jonas has quit (Quit: leaving). 00:50:02 -!- Sgeo has quit (Ping timeout: 245 seconds). 01:17:01 -!- adu has quit (Quit: adu). 01:39:48 -!- MDude has quit (Ping timeout: 244 seconds). 02:13:54 -!- Sgeo has joined. 02:30:06 -!- moony has quit (Quit: Bye!). 02:30:47 -!- Bowserinator_ has joined. 02:31:03 -!- moony has joined. 02:32:46 -!- Bowserinator has quit (Ping timeout: 272 seconds). 02:34:31 -!- Bowserinator_ has changed nick to Bowserinator. 02:37:25 [[User talk:A]] M https://esolangs.org/w/index.php?diff=64621&oldid=64620 * A * (+272) /* Collaboration Request */ 02:38:55 -!- xkapastel has quit (Quit: Connection closed for inactivity). 02:39:33 -!- adu has joined. 02:41:59 [[User talk:A]] https://esolangs.org/w/index.php?diff=64622&oldid=64621 * A * (+577) /* Collaboration Request */ 02:44:11 [[User talk:A]] M https://esolangs.org/w/index.php?diff=64623&oldid=64622 * A * (+53) /* Collaboration Request */ 02:48:06 [[User talk:A]] M https://esolangs.org/w/index.php?diff=64624&oldid=64623 * A * (+0) /* Collaboration Request */ terrible grammar 02:55:24 [[User talk:A]] https://esolangs.org/w/index.php?diff=64625&oldid=64624 * Areallycoolusername * (+292) /* Collaboration Request */ 02:58:11 [[User talk:A]] M https://esolangs.org/w/index.php?diff=64626&oldid=64625 * A * (+269) /* Collaboration Request */ 03:00:38 [[User talk:A]] M https://esolangs.org/w/index.php?diff=64627&oldid=64626 * A * (+530) /* Collaboration Request */ 03:03:42 [[User talk:A]] M https://esolangs.org/w/index.php?diff=64628&oldid=64627 * A * (-16) /* Collaboration Request */ 03:09:04 [[Special:Log/newusers]] create * Tokitoki * New user account 04:05:46 [[User talk:A]] https://esolangs.org/w/index.php?diff=64629&oldid=64628 * Areallycoolusername * (+1085) /* Collaboration Request */ 04:06:05 [[User talk:A]] https://esolangs.org/w/index.php?diff=64630&oldid=64629 * Areallycoolusername * (+116) /* Collaboration Request */ 04:15:05 [[User talk:A]] 
https://esolangs.org/w/index.php?diff=64631&oldid=64630 * Areallycoolusername * (+48) /* Collaboration Request */ 04:15:26 -!- tromp has joined. 04:15:29 [[User talk:A]] https://esolangs.org/w/index.php?diff=64632&oldid=64631 * Areallycoolusername * (-2) /* Collaboration Request */ 04:18:03 -!- tromp_ has quit (Ping timeout: 264 seconds). 04:20:05 [[User talk:A]] M https://esolangs.org/w/index.php?diff=64633&oldid=64632 * A * (+321) 04:38:25 [[User talk:A]] M https://esolangs.org/w/index.php?diff=64634&oldid=64633 * A * (+254) /* Collaboration Request */ 04:41:41 -!- FreeFull has quit. 04:43:11 [[User talk:A]] M https://esolangs.org/w/index.php?diff=64635&oldid=64634 * A * (+275) /* Collaboration Request */ 04:50:36 [[User talk:A]] M https://esolangs.org/w/index.php?diff=64636&oldid=64635 * A * (+275) /* Collaboration Request */ 04:58:31 [[User talk:A]] https://esolangs.org/w/index.php?diff=64637&oldid=64636 * A * (+222) /* Collaboration Request */ 05:11:48 https://twitter.com/zeuxcg/status/1153530252942950400 05:11:51 kmc: Do you like this? 05:12:09 this = \rainbow{C++} 05:15:12 meh 05:16:01 -!- iconmaster has joined. 05:16:18 kmc: i saw a good cat 05:16:23 good cats are tg 05:16:40 bad cats are cursed 05:20:08 [[Seabass]] https://esolangs.org/w/index.php?diff=64638&oldid=64610 * JonoCode9374 * (+29) /* Examples */ 05:20:52 cool 05:21:05 I saw a big, fat orange cat in our backyard today 05:22:06 lucky 05:22:10 -!- sprocklem has quit (Ping timeout: 246 seconds). 05:28:15 yes 05:28:17 I suppose so 05:39:09 -!- sprocklem has joined. 05:41:19 -!- sprocklem has quit (Client Quit). 05:46:48 -!- sprocklem has joined. 06:40:18 -!- Frater_EST has joined. 06:40:32 -!- Frater_EST has left. 06:57:36 [[User talk:A]] M https://esolangs.org/w/index.php?diff=64639&oldid=64637 * A * (+298) /* Collaboration Request */ 07:23:08 -!- zzo38 has quit (Ping timeout: 245 seconds). 07:23:11 -!- shachaf has quit (Ping timeout: 245 seconds). 07:23:29 -!- shachaf has joined. 
07:41:03 -!- iconmaster_ has joined. 07:45:25 -!- iconmaster has quit (Ping timeout: 276 seconds). 07:53:06 -!- Lord_of_Life has quit (Ping timeout: 248 seconds). 07:57:01 -!- Lord_of_Life has joined. 08:36:57 [[Seabass]] https://esolangs.org/w/index.php?diff=64640&oldid=64638 * JonoCode9374 * (+39) /* Truth-machine */ 08:38:50 [[Seabass]] M https://esolangs.org/w/index.php?diff=64641&oldid=64640 * JonoCode9374 * (-25) /* Examples */ Formatting 08:48:09 -!- cpressey has joined. 08:54:02 -!- arseniiv has joined. 09:02:01 -!- AnotherTest has joined. 09:43:25 Good morning. To answer Taneb's question of yesterday: yes, function types appear to have the same structure as state transitions. 09:44:08 Inputs to state machines are a little weird: an input is a named, overloaded function. 09:44:59 And I should stress this is just a structural similarity. There is a huge, uh, semantic gap here. Types are static things, but state machines are quite dynamic. 09:47:51 Do type systems based on algebraic data types even usually support overloaded functions (e.g. a la Java)? 09:48:21 I'm not sure I've ever seen one. 09:59:33 Does Haskell's typeclass-based polymorphism count? 10:04:47 -!- wob_jonas has joined. 10:05:54 cpressey: rust technically has overloaded methods of the same name, and it has an algebraic type system, but the type inference is a bit weird and incoherent 10:06:13 No sorry, not incoherent, as in, it's not supposed to prove impossible things, 10:06:53 I mean inconsistent, as in you can't guess what it will be able to infer and what you must explicitly annotate, and this can even change so you have to add new annotations when you add new methods. 10:07:36 Standard ML also has a few built-in operators that are overloaded, but you can't define new functions of that sort. 10:17:02 I can well imagine that function overloading does not play nicely with HM-style type inference. 10:17:30 (I want to like Rust, but it's really hard.) 
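[Editor's note: a minimal sketch, not from the log, of the kind of "overloading" discussed above. In Rust the closest analogue is implementing the same method name for several types via a trait, with resolution by the receiver's type rather than by an argument-list signature as in Java. The trait and method names (`Describe`, `describe`) are invented for illustration.]

```rust
// Sketch: trait-based "overloading" in Rust. All names here are
// hypothetical; this only illustrates the mechanism discussed above.
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for i32 {
    fn describe(&self) -> String {
        format!("int {}", self)
    }
}

impl<'a> Describe for &'a str {
    fn describe(&self) -> String {
        format!("str {}", self)
    }
}

fn main() {
    // Resolution is by the receiver's type; the literal gets an i32
    // suffix because a bare integer literal's type can be ambiguous.
    println!("{}", 5i32.describe());
    println!("{}", "hi".describe());
}
```

This also hints at why the inference complaints above arise: adding a new `impl` can make a previously unambiguous call ambiguous, forcing new annotations.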
10:19:17 cpressey: rust's name resolution rules are, in general, really strange to me 10:19:36 Why do you want to like Rust? Or: Why is it hard? 10:20:44 it seems like they were originally designed well, but then they wanted to make them a little more capable than they can be with reasonable rules, and because of that they had to add all sorts of magic to make it work at all 10:22:25 besides the method lookup, there are parts where if it has to look for a name among multiple modules, it will ignore sorts of names that don't make sense syntactically (not by type, but by sort as in const, static, enum constructor, struct constructor, function etc) 10:23:48 and there's the ugly part about matching to a name where it can either bind to a new local name or use a constructor with no arguments, and the only thing that stops typos in constructor names from causing silent errors is that there are warnings if you don't name a constructor in upper case or a local variable in lower case. 10:24:31 OK, to be more honest, I *used* to want to like Rust. I have since given up. 10:24:55 cpressey: I still like rust, but still in the future sense that it will "soon" become a good language 10:24:58 I have a hard time liking a language with no specification that makes me write a file called "Cargo.toml" in order to use it 10:25:34 cpressey: oh, you don't have to use cargo. 
that part is full of shit, ignore it 10:25:37 just call the compiler directly 10:25:54 and if the javascript people tell you that you have to get packages from nodejs, ignore those too, 10:26:18 as well as the C or C++ people who tell you to use their favourite overengineered build system 10:26:53 the package repository is the sort of stupid dependency hell with low quality packages, you should ignore most of that 10:27:39 I am also amused by the fact that Haskell apparently has two package systems these days 10:27:53 and don't use cargo, except perhaps in as much as you need it to build existing good rust projects that use cargo, most importantly the rustc compiler 10:28:13 cpressey: that's normal. it'd be bad if a language or compiler tied you to a package system 10:29:25 Well, my point is not really about package systems, it's more about community, I suppose. Although it's also true that I believe you shouldn't judge a language by its community. 10:29:59 What languages do I actually *like*, anyway? 10:30:06 * cpressey thinks 10:32:14 Taneb: sorry, didn't see that. Typeclasses sort of count. But the set of state machines where the transitions only work like that would be - restricted. Not sure what the implications would be. 10:32:46 cpressey: yeah, that would be a good question. if you can tell what languages you like and what you like in them and what you don't like, that might help tell if rust is useful for you 10:33:33 C++ has overloaded functions, of course, but no real algebraic type system 10:36:01 I once tried to read an article about how to implement a linked list in Rust. It made it sound like utter hell. It also started out with a lengthy argument about how linked lists are a "niche data structure" that no one really uses. 10:38:20 Man, are there any good languages? 10:38:35 cpressey: what kind of linked list? the mutable one or the immutable haskell list one? the latter is trivial to implement. 
for the former, there are a lot of nontrivial decisions you have to make about the interface in any language if you try to make one, but once you decide what interface you really want it to have, it's not hard to implement. 10:39:25 the problem only comes from conflicting expectations if you believe those rust evangelists who tell you that you have to write a safe (in the rust sense) interface for everything, and at the same time want to have an interface that is impossible to do that way. 10:39:32 I think non-intrusive linked lists should probably be pretty niche in a language like Rust. 10:39:42 oh yeah, non-intrusive 10:40:06 admittedly intrusive would be _even_ harder to write in rust than in C++, and that you can call a genuine drawback. 10:41:29 but since you can have algebraic types generically parametrized by types, you can use non-intrusive lists in a reasonable way, generic over their content type 10:41:30 wob_jonas: I think you might have hit the nail on the head with the conflicting expectations thing. 10:42:33 I can try to dig up the article if anyone is really interested. 10:42:48 as for the community, you're on the internet, you have to prepare yourself for evangelists that seriously try to tell you overstatements, such as that rust can do anything that C can do (rust is pretty close for most practical applications, but it's not entirely true), or that you should always use safe rust because you can do anything in safe rust, 10:42:59 But it was almost certainly referring to intrusive linked lists, yeah. 10:43:01 etc. 10:44:07 cpressey: I suspect that statement is referring to a non-intrusive linked list with a hypothetical interface that is safe but lets you do all unsafe mutating operations safely, like somehow magically figure out that you aren't using a pointer to a freed node. 10:44:31 it's not possible to have such an interface to arrays either, neither in rust, nor in any language. 10:44:36 Found it. 
https://rust-unofficial.github.io/too-many-lists/ 10:45:03 I think it has changed slightly since I first read it 10:45:24 "Just so we're totally 100% clear: I hate linked lists. With a passion." 10:45:29 That hasn't changed 10:46:00 oh, and another lie you shouldn't believe is that you can have any abstraction for zero runtime cost. again there rust gets you closer than many languages, perhaps better in some areas than C++ and worse in others. 10:46:10 Okay, note to self: stop reading things on the Internet 10:46:17 no no 10:46:24 you should read them, just don't believe them 10:46:31 Only scans of compsci papers from before 2002 10:46:43 Why read things that are mostly nonsense and written by nonsense people and will make you sad? 10:47:03 shachaf: to find the few gems that aren't mostly nonsense 10:47:19 Why don't you find those gems, and email them to me. 10:47:25 Then I won't have to read anything else. 10:47:34 -!- MDude has joined. 10:48:50 shachaf: because you shouldn't believe what I say too about which ones are gems 10:49:29 Of course. I also don't believe that I should read them. 10:50:06 I do sometimes link stuff on irc, but most of those are nonsense too 10:50:31 One of the Rust things that seems to be an attitude is that the experts will write data structures or whatever, and they'll use "unsafe" and so on, but then regular humans like you and me won't do that. 10:50:52 I feel like that's not so great in a systems language, where often your thing shouldn't be decomposed into a bunch of prepackaged data structures. 10:51:13 Instead your whole program is the fancy data structure, and you're the expert. 10:51:25 Maybe I'm overstating it. 10:51:49 I feel this would all make more sense if the thing had a specification 10:52:00 shachaf: no no. if you want to have runtime efficiency, you should use unsafe, but learn its rules. 
if you want to write high-level programs like you write in safe haskell or perl and mostly don't care about efficiency, then don't use unsafe. and you can use unsafe only for the loops that you have to optimize. 10:52:08 Because without a specification how can I write a proof that it is safe? 10:52:31 Where do I refer to for the definitions of the things in my proof? 10:52:46 And if I have no proof, on what basis do I say it's safe? 10:52:56 cpressey: yes, that's a bad part, it's hardly documented, they haven't quite figured out all the unsafe rules yet. mind you, people can't figure out all the rules of memory access in C either, not even for single-threaded. 10:53:30 there's some documentation, but it's often unclear, because nobody likes to write documentation, so sometimes you have to ask on irc for people who understand the thing to clarify. 11:00:08 To be clear, I do like many of the ideas Rust is based on, and I think it's probably a better choice than C++ for a lot of projects (although that's partly because I detest C++). 11:01:09 I'd prefer a language that's a better choice than C. 11:02:21 cpressey: I haven't read that whole article, but it seems bizarre to me that doubly linked lists would be a problem because they want each node owns the next node or something. 11:02:23 I often use high-level languages and I rarely need to care about performance. When I do need to care about performance, it is rarely at the level where using unsafe operations would make a difference. 11:02:38 I imagine that instead you'd have some other thing that owns every node in the list. 11:03:00 I think the style of writing C and C++ where pointers are conflated with ownership isn't that great. 11:06:08 shachaf: I'm not sure how much Rust improves over C++ in that regard. I've had it described to me that Rust works this way because it implements RAII "properly" (implication being that C++ does not). 
11:06:57 I would rather just have a garbage collector and not have to worry about my objects having a deep understanding of how to acquire what they need when they are allocated 11:07:29 Also this person is saying odd things like "a bunch of pieces of data on the heap (hush, kernel people!)" as if the heap isn't a fake idea. 11:08:06 I mean: Linked list nodes can be wherever I want them to be, that's the point. 11:08:34 cpressey: Oh, well, it sounds like you could use almost any programming language made in the last few decades. Why Rust? 11:08:46 cpressey: rust does improve on the C++ rules in that it allows better low-level runtime optimizations. if you don't want that, then sure, allocate everything separately and garbage-collect or reference count everything. sometimes that is worth it, and you can have at least the reference counting reasonably in rust, probably could do a proper gc too though I'm not sure about that. 11:08:48 shachaf: Why indeed? 11:09:36 You can do the reference count everything stuff in rust about as well as in modern C++. 11:10:52 Hmm, my conclusion about this article is that it's not worth reading. 11:15:58 Alright, alright, so Rust is what it is. What I really find obnoxious is hype and bad evangelism. Whether Rust evangelism is worse than other language evangelism, I can't really say. I like Haskell (mostly) and I've seen some bad Haskell evangelism too. 11:16:30 Rust evangelism is pretty bad and annoying. 11:17:15 So is Haskell evangelism. 11:35:12 I don't dislike Julia. I haven't really seen evangelism for it. Maybe because it has an obvious nemesis (R) towards which such chest-thumping can be directed...? 11:36:16 I have very little interest in numerical computing, but even outside of that Julia seems just fine as a general-purpose programming language. 
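[Editor's note: the "trivial to implement" immutable list mentioned above can be sketched in a few lines of safe Rust: each node owns its tail via `Box`, so no unsafe code, reference counting, or GC is involved. Type and method names here are invented for illustration.]

```rust
// Sketch of the Haskell-style singly linked list: ownership of the
// tail lives inside the node itself, so safe Rust handles it directly.
enum List<T> {
    Cons(T, Box<List<T>>),
    Nil,
}

impl<T> List<T> {
    fn len(&self) -> usize {
        match self {
            List::Cons(_, rest) => 1 + rest.len(),
            List::Nil => 0,
        }
    }
}

fn main() {
    let xs = List::Cons(1, Box::new(List::Cons(2, Box::new(List::Nil))));
    println!("len = {}", xs.len());
}
```

The hard cases discussed in the article are the mutable and doubly linked variants, where single ownership no longer matches the pointer structure.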
11:48:02 And then I wonder if I should learn LLVM or Liquid Haskell and then I wonder if it's an academic question because I have no time to write anything of interest in them anyway. 11:48:31 -!- iconmaster_ has quit (Ping timeout: 276 seconds). 11:52:34 -!- iconmaster has joined. 11:58:54 cpressey: hmm. I barely know anything about Julia. I should look it up a bit if you say it's a fine general-purpose language. I might not like it, but I at least want to be a little familiar with these. 11:59:17 -!- budonyc has quit (Ping timeout: 244 seconds). 12:01:02 wob_jonas: It's definitely worth looking at, IMO. Some of the approaches it takes are interesting. 12:02:39 cpressey: as for rust, even though people complain about the community, the fact that it has a community on irc has helped me a lot. I could ask questions and get specific answers, and understand rust better that way. I could mostly ignore the evangelism, since that doesn't come through if I ask specific technical stuff. 12:03:08 this isn't exclusive to rust of course, but it helped a lot with rust because of its ... unsatisfying documentation 12:03:58 or let's say, the documentation is slow, but probably not really that slow if you compare it to languages like C or C++ 12:14:41 A lot of the bad evangelism I've seen (for anything) has come by way of social media. I think that's probably a factor. (I'm not on social media anymore) 12:15:03 IRC I'm not on regularly, but I seem to come back to visit every 3 to 5 years 12:15:54 I seem to have rust installed on this laptop, but not configured correctly. I don't remember when/why this happened. 12:16:04 that is quite likely. I don't use facebook, and rarely use twitter. 12:21:27 Another thing about Rust that made it hard to like was how much, and how frequently, the language changes. 
I wrote this 5 years ago, I wonder if it still passes: https://github.com/catseye/Dipple/blob/master/rust/Tests.markdown 12:26:35 I will need to try to unbreak my rust installation to see 12:27:52 cpressey: yes, it does change a bit. they do care somewhat about compatibility of course. 12:28:58 cpressey: does that do anything fancy that would change? I'd guess it should still pass, though of course the specific error messages would change. 12:29:21 but I haven't tried to run it 12:31:22 ah no 12:31:31 the parts using the tilde box syntax would fail 12:31:44 that changed, but I think it changed before rust version 1 12:31:50 it's not a recent change by any means 12:32:15 the language hasn't changed that much since version 1 in incompatible ways 12:38:43 Version 1 was in 2015 I think. I don't know when the tilde box syntax disappeared. 12:42:14 -!- xkapastel has joined. 12:48:01 Good to see that it's slowed down a little in recent years. There are "editions" of the language, now? Okay... 12:48:58 -!- iconmaster has quit (Ping timeout: 276 seconds). 12:49:43 -!- iconmaster_ has joined. 12:50:23 cpressey: two editions with some syntax differences, and they're direct call compatible, you can have one compilation unit be in one edition and another one in another, and directly call functions or use types across them, because the type system and runtime is the same, anything you write in one edition still makes sense in the other. 12:51:08 and since this isn't C vs C++, you don't have to write headers in both syntaxes (unless you have circular dependencies), the compiler can read the interface of a compilation unit from the object file, just like in haskell. 12:51:41 they're still working on some problems with macros, since those work at a level very close to the syntax, and it's a bit hard to mix the two syntaxes in the same statement, which needs more compiler magic. 
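[Editor's note: for concreteness, the "tilde box syntax" referred to above was pre-1.0 Rust's spelling of an owned heap allocation; since 1.0 the same thing is written with `Box`. A minimal sketch of the modern equivalent:]

```rust
// Pre-1.0 Rust wrote owned heap pointers as `~T` / `~expr`,
// e.g. `let b: ~int = ~42;`. Since Rust 1.0 this is Box<T>.
fn main() {
    let b: Box<i32> = Box::new(42); // was: let b: ~int = ~42;
    assert_eq!(*b, 42); // deref works the same way as it did for ~
    println!("{}", b);
}
```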
12:52:46 basically the compiler has to tag identifiers with which edition source they come from, so that it can resolve the name properly according to the rules of either the original edition or the 2018 edition. 12:54:23 also the differences between the editions are documented. you use the --edition=2018 switch of the compiler to indicate that a file is written in the newer edition; sadly there's no declaration you can put right in the source file to do that. 12:54:56 if you think of the C and C++ magic compilation flags, this won't sound too bad. 12:55:23 Almost none of this sounds too bad compared to C++ 12:55:27 and I only say it's slowed down for _incompatible_ changes 12:55:45 they added a lot of (mostly) compatible stuff, as in, new features or new library functions 12:56:16 but yes, there are incompatible changes, even recently, ones that have nothing to do with editions 12:56:59 really the editions change only a very few rules, the ones where it makes sense to do the change the way I described above. the incompatible stuff, that you just have to edit if your code triggers it. obviously they try to do it only when it doesn't impact much real code or is easy to fix. 12:57:44 and they try to make it so that the compiler error messages and warnings can help you recognize what broke from incompatible changes, or what will break from planned future incompatible changes 13:04:23 -!- cpressey has quit (Ping timeout: 244 seconds). 13:14:52 hm, it seems that if I ever design a sufficiently complex language, I had better add a required “version flavor” directive to its grammar so as to warn users if there is e. g. a breaking change in syntax→semantics but the old syntax is still valid 13:18:56 when I was in school, I had a wrong impression (I think because of reading MS JScript documentation on my computer; I had only VB6 and that) that the more sources are correct syntactically and have some meaning, the better 13:19:55 -!- cpressey has joined. 
13:20:10 now I find that the opposite is true and should be true. The more restrictions a language places on the user, the better it can version, the better it can report errors, the better a translator can do 13:21:09 though maybe there could be drawbacks. I’d like to know if there are famous cases when some language was too rigid 13:24:33 also on the same note I think overloading is oversold. A language can have rich function types allowing implicit/optional/keyword/iterable arguments, and then there is less sense in calling two functions that aren't mergeable this way by the same name and then thinking about how to get a reference to one of them without too much syntactic burden (by specifying all argument types?) 13:25:11 (this is about true overloading, not generic functions) 14:21:17 -!- cpressey has quit (Ping timeout: 245 seconds). 14:30:50 -!- ais523 has joined. 14:34:12 -!- cpressey has joined. 14:36:07 cpressey: I've been using Rust for some things, I've been pretty impressed with it so far 14:36:29 I wrote the The Waterfall Model implementation in it, that's probably the largest released program 14:37:24 I doubt it's a perfect language for everything, but I'd favour it for new projects over C (whenever in C I try to do something clever with memory allocation that wouldn't be allowed by Rust, it typically turns out to be incorrect) 14:37:54 how about compared to C++? 14:38:34 -!- trn has quit (Ping timeout: 272 seconds). 
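[Editor's note: a sketch of the "rich argument types instead of overloading" idea above, in Rust terms: Rust has no keyword arguments, but one function taking an options struct with defaults can stand in for a family of overloads. All names here are hypothetical.]

```rust
// One function plus an options struct replaces several overloads;
// struct-update syntax gives something like keyword arguments.
#[derive(Default)]
struct ConnectOpts {
    timeout_ms: u64,
    retries: u32,
}

fn connect(host: &str, opts: ConnectOpts) -> String {
    format!("{} t={} r={}", host, opts.timeout_ms, opts.retries)
}

fn main() {
    // "Keyword arguments": name only the fields you override.
    println!("{}", connect("a", ConnectOpts::default()));
    println!("{}", connect("a", ConnectOpts { retries: 3, ..Default::default() }));
}
```

Since there is only one `connect`, taking a reference to it needs no signature-disambiguation syntax, which is the point being made above.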
14:38:35 I'm unwilling to use C++ for almost anything at all, other than when I need to access linker functionality in a portable way 14:38:58 I guess writing C++ as though it were C is tolerable, but even then, memory allocation is a pain 14:39:16 (I do like many C++ features, like string const-correctness, but gcc lets you request that for C too) 14:40:31 I see 14:41:58 modern C++ though has most of the drawbacks of Rust without many of the benefits 14:42:26 if you're using std::move and std::unique_ptr and friends, why not use a language actually designed around those concepts (and which enforces them much more comprehensively with better error reporting)? 14:43:17 Rust does make you think hard about memory allocation. Which, for some applications, is a very good thing, because you should think hard about memory allocation, so that you get it right. 14:43:59 For other applications, it's much more productive to not have to think about memory allocation at all. 14:46:42 Often I wonder how much friction programmers generate when discussing languages is simply because of insufficient recognition that different programmers program in different application areas. 14:46:53 most of my Rust programs just seem to come down to making a few `Vec`s and storing values in those 14:47:18 and yes, some languages are definitely better for certain fields 14:48:11 ais523: On a completely different note: are cellular automata Turing-complete or not? Let's settle this 14:48:27 s/are/can ... be/ 14:49:04 cpressey: the field of cellular automata in general? there are cellular automata that can simulate the steps of a running Turing machine starting from a playfield with only finitely many non-blank squares 14:49:28 that seems like something that most people would consider to be Turing-completeness 14:49:50 I think it hinges on whether you allow your reduction to introduce halting or whether you insist it has to preserve halting. 
14:50:08 If you allow the reduction to introduce halting, then the set of Turing machines that never halt, is Turing-complete. 14:50:28 Which sounds weird. 14:50:32 you can introduce a "halt colour" into a cellular automaton if you want to, but nobody ever does 14:50:50 Well, then it's arguably a different CA, too. 14:50:52 also, I personally consider the set of Turing machines that don't halt to be TC, you just need to define a different halt state (e.g. a trivial infinite loop) 14:51:16 cpressey: well, yes, but that different CA is still a CA and is Turing-complete, we're just trying to establish whether at least one CA is Turing-complete 14:52:35 Ehm, well, I don't outright reject your view; actually I advocate for a more nuanced view of "Turing-complete". 14:53:17 for me, the main interesting definitional problems with Turing-completeness (other than the fancy L thing) is a) what sort of initial conditions are allowed, and b) what sort of halt states are allowed 14:53:48 I don't think either problem has been resolved, either by me or by the mathematical community as a whole 14:54:35 https://esolangs.org/wiki/Sequential_tag_system is a good example of the type of problems we see; the simplest known "TC" CAs simulate that (rather than cyclic tag), so the question is whether using a cyclic initial condition is permissible 14:54:52 Well, to show something Turing-complete, you need a reduction. Which reductions are allowable and which are not is not universally agreed upon. I think this is not too different from what you are saying. 14:56:46 I think it's generally accepted that it's sufficient for the reduction to be computable and always halt 14:57:15 but a reduction like that can't generate an infinitely long program, which is what cellular automata and Turing machines require 14:58:55 Do they? ... it's interesting that we seem to see different problems with it. 
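[Editor's note: since cyclic tag systems keep coming up in this discussion, here is a tiny stepper for one. The productions and initial queue are arbitrary toy examples, not taken from any of the proofs referenced above: productions are consulted in a fixed cycle, the queue's head is consumed each step, and the current production is appended only when that head is 1.]

```rust
// Toy cyclic tag system stepper (example data, not from any TC proof).
fn run_cyclic_tag(prods: &[&str], init: &str, steps: usize) -> String {
    let mut queue: Vec<char> = init.chars().collect();
    for i in 0..steps {
        if queue.is_empty() {
            break; // the system halts by emptying its queue
        }
        let head = queue.remove(0);
        if head == '1' {
            // append the production for this position in the cycle
            queue.extend(prods[i % prods.len()].chars());
        }
    }
    queue.into_iter().collect()
}

fn main() {
    // 4 steps of a 3-production system starting from "1"
    println!("{}", run_cyclic_tag(&["011", "10", "101"], "1", 4));
}
```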
14:59:54 actually I think all the interesting problems in the definition of TCness may be special cases of a larger problem 15:00:43 which is "suppose program X generates (possibly infinite) output that is read as input by program Y: what pairs of computational classes (X's computational class, Y's computational class) allow the combined program to be Turing-complete?" 15:02:24 a long time ago I thought it was obvious that either X or Y had to be TC, but now I'm no longer sure 15:02:43 OK, I've managed to confuse myself again already. 15:02:57 You can get pretty far with an "F"SA with infinite states, I imagine 15:02:58 well, there are trivial constructions such as "X produces infinitely many 0s, Y is an LBA", but that seems like cheating in some way 15:03:31 Taneb: how do you even define that, though? presumably you need some sort of rule for how a state behaves, and then that rule becomes the language 15:04:00 ais523: yeah, I think I've not given this enough thought 15:04:44 next you'll say that the infinite mathematicians choosing an arbitrary non-principal ultrafilter during their meeting before the game with red and blue hats seems like cheating too 15:05:27 OK here is my overall feeling. There is a set of allowable reductions for showing Turing-completeness. Some of these reductions are more complex than others. e.g. some are polytime, some are not. 15:06:33 You can consider sets of languages that are only polytime-Turing-complete, or are only polytime-each-other-complete (if you follow what I'm handwavingly saying) 15:07:32 You can investigate Turing-completeness in a sort of fine-grained way. 15:07:59 "polytime-Turing-complete" is only meaningful if you restrict the languages you can reduce /from/ 15:08:30 otherwise, you can just reduce from an exponentially padded language which requires every character in the program to be repeated 2**n times, where n is the original length of the program 15:08:43 sure 15:08:51 Yes, it's intentionally restricted? 
Maybe I don't follow you 15:09:34 even then, if the language has any equivalent of a string constant, the reduction can just bundle an interpreter with a string constant containing the program you're reducing 15:09:59 so "polytime TCness" pretty much only determines how data can be stored in a language 15:09:59 ais523: right, and that's why among practical languages, you have very simple reductions like that 15:10:05 Yes - that's usually considered unallowable or uninteresting, but you can define it, at least 15:10:07 although that's interesting in its own right 15:10:33 cpressey: I think disallowing that would be a huge mistake (and, anyway, difficult to define); I'm also not convinced it's always uninteresting 15:10:44 To be fair, "how data can be stored in a language" heavily, heavily characterizes a language, does it not? 15:10:50 right, indeed 15:10:58 so even polynomial time seems a bit of an overkill most of the time, but we want to allow a bit more than just straightforward encoding of string literals for languages like incident or (Eodermdrome with an infinite alphabet of letters) 15:11:20 I think almost all languages allow a linear-space-in-program data representation, but some don't and are interesting because of that 15:11:40 ais523: not really just how the data can be stored at runtime, more like how it can be represented in the source code 15:12:15 infinite-alphabet-Eodermdrome probably allows data representation in O(n) characters, thus O(n log n) bytes 15:12:20 I suspect Incident is also O(n log n) 15:13:07 (is Eodermdrome TC with any finite alphabet? If so, what's the smallest alphabet such that Eodermdrome on that alphabet is TC?) 15:13:37 Taneb: it's fancy-L-complete with a fairly small alphabet; it has finitely many programs for any finite alphabet 15:13:56 That makes sense 15:14:08 ais523: yeah. 
you could also imagine a more straightforward language where you have to input everything with alphabetic identifiers, each identifier is globally scoped and can appear at most ten times. you can chain identifiers to make them an alias of each other. you'd have to at least gensym to write code. it's not far from what you can imagine in an esolang. 15:14:55 just like with infinite alphabet Eodermdrome if you have to encode the letters as longer strings with some delimiters 15:15:19 (like, if it allows combining diacritic characters after the letters) 15:15:19 you can also imagine a language for which programs have the same meaning if anagrammed; that would have a limit on storage density in the source code 15:15:59 I think with that storing n bits of string constant requires O(2**n) program characters, in the limit? 15:16:09 ais523: oh yeah, we talked about that sort of thing, droppable punch card languages 15:16:14 because of the IOCCC thing 15:17:28 though of course with a fixed length of the punch card there's only a fixed number of programs 15:17:44 at most 2**(2**(80*12)) 15:18:28 you can have duplicate punch cards 15:18:42 in the extreme, you can have a language like Lenguage that doesn't look at the content of the program at all, only its length 15:19:00 ah yeah the length-only brainfuck variants 15:27:51 I guess another way to put it is, instead of asking "Is A B-complete?" we can ask "What reductions are required to get from A to B?" And the time complexity, or "introducing halting", or "initial conditions", etc, are all things you can consider. 15:28:33 cpressey: sure, but the answer to that definitely has to depend on what B is, and there's definitely no generic answer. 
15:28:53 at least for low powered B 15:29:56 I agree, there is no canonical definition of Turing machines, -- most mathematicians are quite happy to let details slide when it comes to making a point, and this is usually a good thing 15:30:06 um no 15:30:14 that's not what I mean 15:30:19 Wasn't sure 15:30:31 I mean for really differently powered classes 15:31:28 like we said above, you might define Turing-completeness to allow any polynomial time reduction, to study the power of esolangs with very limited formats of IO and source code 15:32:52 defining what you're reducing from is vital for that though 15:33:20 otherwise, you reduce from a language that uses infinitely long programs and thus have infinite time to do so, making even cat polytime TC 15:33:21 The most obvious thing about that seems to be that it disqualifies the languages whose TC proof is a reduction to Tag systems 15:33:37 sometimes we care about the language being able to read arbitrary input from a class, say byte strings, possibly with arbitrary output and interactive IO 15:33:58 or that a language can represent any primitive recursive algorithm with arbitrary non-interactive IO or something 15:34:00 cpressey: I don't see why, tag systems use finite programs and finite input 15:34:13 ais523: right 15:34:18 We seem to be talking about vastly different things 15:34:20 do you mean sequential tag? 15:34:36 cpressey: I got confused, I don't know what I'm talking about 15:34:42 but yes, I'm suspecting a disconnect, e.g. the reductions you're suggesting seem to be backwards 15:34:47 so maybe we're misunderstanding each other 15:34:57 also I'll afk for a bit 15:35:00 -!- wob_jonas has quit (Remote host closed the connection). 
15:36:00 I'm treating reduction as "I'm writing a function that compiles language A programs into language B programs, if my function is correct and sufficiently simple this proves that B is A-complete" 15:37:22 I think it's that by polytime I meant: I'm writing a function that compiles polytime programs in language A into polytime programs in language B 15:37:33 *not* I'm writing a polytime function to do this 15:37:39 oh, I see, a performance-preserving reduction 15:37:46 yes, that's an interesting subject too 15:37:59 although "polytime programs in language A" is often hard to define due to optimising compilers 15:38:11 Tag systems, from what I understand, are not able to do things in poly time, that TMs can do, in poly time 15:38:32 -!- Sgeo|web has joined. 15:38:48 I think they're slower by a factor of the amount of memory in use 15:39:15 -!- Sgeo|web has quit (Remote host closed the connection). 15:39:20 I'm not 100% sure that means that polytime → polytime (with a worse exponent) is possible, but it feels like it would be 15:39:25 At any rate, I think I've seen an objection to e.g. the Rule 110 TC proofs, on that basis: like, yes, rule 110 is technically TC, but not in a "useful way" 15:39:58 ah yes, an O(n**k) time program can use at most O(n**k) memory, so the tag system can simulate it in O(n**(2k)) time 15:40:04 And that seems like, someone being conservative with a fine-grained conception of Turing-completeness, rather than being liberal with it. 15:40:17 That's all. 15:41:20 there are some languages which seem to have an exponential slowdown 15:41:27 -!- iconmaster__ has joined. 
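The slowdown argument above (each simulated step cycles the whole queue, so an O(n**k)-time, O(n**k)-space machine costs roughly O(n**(2k)) tag steps) is easiest to see with a concrete tag system in hand. A minimal interpreter sketch, using De Mol's well-known 2-tag system for the Collatz function as the example program (the `max_steps` cutoff is an artifact of the sketch, not of tag systems):

```python
from collections import deque

def run_tag(rules, word, deletion=2, max_steps=10_000):
    """Minimal tag-system interpreter: each step reads the first symbol,
    deletes `deletion` symbols from the front of the queue, and appends
    that symbol's production to the back. Halts when the queue is too
    short or the head symbol has no production."""
    q = deque(word)
    for step in range(max_steps):
        if len(q) < deletion or q[0] not in rules:
            return "".join(q), step
        prod = rules[q[0]]
        for _ in range(deletion):
            q.popleft()
        q.extend(prod)
    raise RuntimeError("step limit reached")

# De Mol's 2-tag system computing Collatz iterates, with n encoded as 'a'*n:
collatz = {"a": "bc", "b": "a", "c": "aaa"}
```

Each step touches O(1) symbols but represents only a local piece of the simulated machine, so one simulated TM step needs a full pass over a queue whose length tracks the TM's used tape; that is where the extra factor comes from.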
15:41:42 As for infinite programs, that's a whole nother ball of wax, as they say 15:42:03 in particular, I suspect https://esolangs.org/wiki/An_Odd_Rewriting_System may be exponentially slower than some Turing machines at performing the same task 15:42:58 I've just had a dumb esolang idea 15:44:46 Never mind, it doesn't work 15:45:17 (the idea was, you have a list of numbers and execute the program by running the busy beaver corresponding to each number on the tape) 15:45:46 -!- iconmaster_ has quit (Ping timeout: 276 seconds). 15:51:11 The thing I was thinking of was actually from Scott Aaronson's critical review of A New Kind of Science 15:51:34 "However, though Wolfram does not discuss this explicitly in the book, the known simulations of Turing machines by tag systems require exponential slowdown." 15:51:53 https://arxiv.org/abs/quant-ph/0206089 15:52:36 Now, whether you consider that a problem or not -- that's sort of up to you, I think. 15:55:15 I don't consider that a problem for computational class; however, I also don't agree with the statement 15:55:58 i.e. I think tag systems can simulate Turing machines with only polynomial slowdown 16:03:46 yep: you can compile a Turing machine into a cellular automaton with no slowdown (effectively by using a colour for each TM colour, plus a colour for each (TM colour / TM state) pair, the latter are only used where the tape head is) 16:05:06 and you can compile a cellular automaton into a tag system with quadratic slowdown (via evaluating the entire initialized portion of the tape one character at a time on each step, then adding an extra uninitialized character at each end of the tape so that it doesn't run out) 16:06:18 Well, this is from 2002; at the time, this might not've been established. 16:07:03 The existence (or not) of such languages that are (let's say) only-exponentially-Turing-complete is an interesting subject to me, at any rate. 
16:07:14 if so then 2002 needed better complexity theorists 16:07:25 to me, the construction I just listed is the obvious one 16:07:47 and I can't think of a slower construction except via arbitrarily slowing things down 16:07:50 I know I've had lots of difficulty coming up with languages that provably require high complexity to interpret 16:08:11 (I'm thinking of EXPLOAD) 16:08:34 that doesn't surprise me; AORS's "TC, but very slowly" came as a big surprise when I discovered it 16:12:15 something like https://esolangs.org/wiki/Brainhype works but that's cheating 16:13:22 The weakness seems to be that it's often possible to find a "sufficiently clever" Hashlife-like optimization (and concomitantly difficult to prove that no such optimization exists!) 16:16:23 that said, this conversation gave me an idea for a language but I have no idea if it's even TC 16:16:59 you start with nondeterministic Fractran, specifically the variant in which all commands that can legally run at any given point run in parallel in separate threads, and the program doesn't halt as long as at least one thread is still running 16:17:11 but then you add a rule that a command cannot run unless the program unconditionally halts in the absence of that command 16:18:01 I'm not convinced that this is computable, and I'm /also/ not convinced that it's usable for programming! quite a wide range there 16:20:51 -!- Sgeo has quit (Read error: Connection reset by peer). 16:21:17 -!- Sgeo has joined. 
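The variant described above builds on plain Fractran, so a baseline interpreter is a useful reference point; the nondeterministic, thread-spawning semantics would replace the "first matching fraction" rule below with "every matching fraction, in a separate thread". A sketch of the deterministic base (the `max_steps` cutoff is an artifact of the sketch):

```python
from fractions import Fraction

def fractran(program, n, max_steps=100_000):
    """Run a deterministic Fractran program: at each step, multiply the
    state n by the first fraction that yields an integer; halt when no
    fraction does."""
    fracs = [Fraction(p, q) for p, q in program]
    for _ in range(max_steps):
        for f in fracs:
            m = n * f
            if m.denominator == 1:  # n * p/q is an integer: rule fires
                n = int(m)
                break
        else:
            return n  # no rule applicable: halt
    raise RuntimeError("step limit reached")

# Conway's adder: the single fraction 3/2 turns 2**a * 3**b into 3**(a+b).
assert fractran([(3, 2)], 2**3 * 3**5) == 3**8
```

State lives entirely in the prime factorization of n, which is what makes the "which commands can legally run" condition in the variant a purely arithmetic question.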
16:21:45 This has definitely been an interesting conversation; I don't have any new ideas but I might be able to organise some of my existing ones more coherently 16:23:01 I might try to understand AORS better; it might be describable in somewhat-more-conventional term-rewriting terms 16:26:48 it's basically a tag system that can't maintain memory beyond a particular point in the queue (that cycles round from start to end) 16:27:06 the definition on Esolang is an attempt to formalize that, the informal intuition can be helpful too though 16:44:27 or, hmm, maybe 2C is the language that can't maintain memory but has no other restrictions, and AORS has an additional restriction? 16:45:26 -!- b_jonas has joined. 16:46:48 -!- FreeFull has joined. 16:48:11 -!- trn has joined. 16:48:30 wb_jonas 16:53:47 -!- Phantom_Hoover has joined. 16:57:16 -!- cpressey has quit (Quit: A la prochaine.). 17:07:59 -!- user24 has joined. 17:10:19 reductions that can turn a polynomial-runtime program into a polynomial-runtime program are important, and I care about them, in fact I care even about stricter time limits, but people shouldn't try to argue that the word "Turing-complete" refers to anything like that, because I think it's established that it doesn't mean that 17:11:36 a good example is counter machines with a fixed number of counters: they're Turing-complete, but you blow up your runtime exponentially if you translate programs from more reasonable languages to them, or double-exponentially if there are only two counter registers 17:11:38 [[User talk:A]] https://esolangs.org/w/index.php?diff=64642&oldid=64639 * Areallycoolusername * (+3198) 17:12:41 but the whole P class is so robust because we have a lot of natural computation models that are equivalent to each other and blow up by only at most polynomial time 17:12:47 [[User talk:A]] https://esolangs.org/w/index.php?diff=64643&oldid=64642 * Areallycoolusername * (+4) 17:13:07 so it doesn't matter which one you use to define which programs 
run in polynomial time in the size of the input 17:15:03 -!- arseniiv has quit (Ping timeout: 245 seconds). 17:15:25 -!- arseniiv has joined. 17:27:43 ais523, cpressey: that statement in Scott's review is indeed written in a confusing way. as far as I can tell, the problem is that the construction to use that simple cellular automaton can't simulate just any tag system (that would be a strange sort of proof), and the simulation they have does have an exponential blowup, just like with counter machines. 17:28:05 but yes, that's not what the article says 17:28:19 -!- user24 has quit (Quit: Leaving). 17:28:41 we're simulating a turing machine with a cellular automaton with a tag machine, not the other way round 17:28:54 that said, the other direction (simulating a tag machine with a turing machine) is also fairly efficient, IIRC 17:29:14 ah no 17:29:28 apparently it's the cellular automaton that simulates certain tag systems with an exponential blowup 17:29:36 then that statement definitely seems wrong in that draft 17:31:24 oh 17:31:31 a cellular automaton with a tag machine? ok 17:32:00 anyway, the slowdown is in simulating something with that particular cellular automaton 17:32:06 hmm wait 17:32:21 wait, I'm confused 17:33:35 no look, Scott's review clearly seems to say on pages 5 to 6 that it's about simulating some kind of tag system with the rule 110 automaton, and simulating a turing machine with the tag system. 
17:33:45 he claims that it's the latter that's slow, I think that's wrong but I'm not sure 17:34:16 it's definitely not talking about simulating a cellular automaton with a tag machine there, because Wolfram isn't either 17:34:24 there's an entirely different argument that shows that's wrong: the very first universal turing machine simulated tag machines, that's why they were invented 17:34:33 and people don't complain about a slowdown there 17:34:53 Wolfram was interested in how powerful simple 1-dimensional cellular automata are 17:35:07 (the naive method has a quadratic slowdown, maybe there are non-naive methods that are faster) 17:35:10 ones with a three-cell neighbourhood and two states 17:35:47 and I think the cellular automaton construction has been improved later, after that review 17:36:35 well, Cook's construction for rule 110 simulates sequential tag 17:37:00 and that's also polynomially slower than the Turing machine it's simulating, but only polynomially 17:37:36 yes, but isn't Cook's construction using an exponential blowup in the simulation? 
17:37:56 the original one, which was known at the time of Scott's review 17:38:35 it can't be, the reason is that he created cyclic tag from sequential tag via giving it a periodic input 17:38:55 that trick only works if the cellular automaton is linear time with respect to the sequential tag system (otherwise the input wouldn't be periodic) 17:39:20 no, not necessarily 17:39:48 the periodic pattern in the CA can be much more fine-grained than the tag system symbols 17:40:18 well, in this case (I know the general shape of the rule 110 TCness construction, every symbol in the repetitive part is used) 17:40:23 actually, hmm 17:40:27 imagine simulating anything in Brainfuck, where you use a periodic pattern to store multiple tapes on one tape 17:40:33 it could be that the construction moves to the right exponentially fast 17:40:47 with the distance the symbols have to move doubling each time round the tape 17:41:15 so the symbols are sent in linear time but arrive in exponential time because the queue is running away from them 17:41:27 that would be specific to the rule 110 construction, though, not tag systems in general 17:41:28 only Brainfuck is too powerful, it can do the initialization and tape extension all on its own 17:41:50 ais523: yes, something like that 18:13:40 -!- ais523 has quit (Quit: quit). 18:15:47 -!- rain2 has joined. 18:18:20 -!- rain1 has quit (Ping timeout: 272 seconds). 18:21:13 <\oren\> ok this is an issue. I can't remember how my vectorization algorithm works 18:21:34 <\oren\> and I didn't put any comments explaining it 18:26:11 -!- iconmaster_ has joined. 18:30:52 -!- iconmaster__ has quit (Ping timeout: 276 seconds). 18:31:50 <\oren\> Ok I figured out how I did it 18:46:49 <\oren\> Basically it involves keeping track of the vertices between pixels instead of the pixels 19:41:57 -!- xkapastel has quit (Quit: Connection closed for inactivity). 19:51:11 -!- Lord_of_Life_ has joined. 
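Rule 110, the two-state, three-cell-neighbourhood automaton discussed above, takes only a few lines to step, which makes the "how fast does the construction move" question easy to experiment with. A sketch (boundary handling is a choice; here the finite row is padded with zeros on both sides, so the row grows by one cell per step):

```python
RULE = 110  # 0b01101110: bit k gives the output for neighbourhood value k

def step(cells):
    """One step of rule 110 on a zero-padded finite row of 0/1 cells."""
    padded = [0] + cells + [0]
    return [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]
```

Encoding the rule number's bits as the truth table is the standard Wolfram numbering, so swapping `RULE` for another 0-255 value steps any elementary CA.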
19:54:08 -!- Lord_of_Life has quit (Ping timeout: 268 seconds). 19:54:09 -!- Lord_of_Life_ has changed nick to Lord_of_Life. 20:06:59 [[User talk:A]] https://esolangs.org/w/index.php?diff=64644&oldid=64643 * Areallycoolusername * (+0) /* Collaboration Request */ 20:13:20 -!- xkapastel has joined. 20:27:02 -!- KindOne has changed nick to uplime. 21:29:37 -!- AnotherTest has quit (Ping timeout: 276 seconds). 23:03:11 -!- tromp_ has joined. 23:06:10 -!- tromp has quit (Ping timeout: 248 seconds). 23:13:06 -!- Phantom_Hoover has quit (Ping timeout: 248 seconds). 23:22:33 -!- arseniiv has quit (Ping timeout: 245 seconds). 23:46:17 Woot, I successfully implemented a scan code translation table in the IBM PC simulator! 23:46:25 Now when you type stuff, it shows the stuff that you typed, instead of showing weird garbage! 23:46:46 The first thing I typed was "spinx of black quartz judge my vow". 23:46:48 tswett[m]: nice! does it support the shift and right shift and control keys as modifiers? 23:46:53 Nope. 23:46:58 It doesn't support anything besides letters yet. 23:47:08 do you plan to support them later though? 23:47:11 I typo'd the word "sphinx". 23:47:15 does it support the space bar? 23:47:16 Yeah, eventually. 23:47:30 It kind of supports the space bar, but not on purpose. 23:47:38 good 23:47:40 Anything it doesn't recognize shows up as a black, and it doesn't recognize the space bar... 23:47:43 So yes, it supports the space bar. :D 23:47:48 *as a blank 23:48:18 Also, it doesn't yet recognize that key-down and key-up are different kinds of things; it just interprets a key-up as an unknown character. 23:50:54 Also, programming in machine code is awfully tedious. 
23:51:25 my favourite example to type is the Weöres Sándor poem "Munka és béke", because it has at least one of most lower case letters, with like five or six missing, and most of them appear once in the first stanza and once in the second stanza 23:53:30 It also has two versions published that differ in a few words, and one of the versions is missing one fewer letter 23:55:28 Hmmmm, nice. 23:57:35 In any case, one of the next things I'm gonna want to do is implement some Forth-like stuff. 23:57:43 The missing lowercase letters are "ö" plus the rare letters "q x w í ú" 23:58:04 I want this thing to be able to write its own damn machine code. :D 23:58:30 no that's not quite right 23:58:44 The missing lowercase letters in one variant are "ö" plus the rare letters "q x w í ú ű" 23:59:23 but in the other variant, the first line is "Méh-raj duruzsol fák közt, fű alól,", which adds "ö" and "ű", thus making it much closer to containing all lower case letters 23:59:59 And I prefer to add his name too, so we at least have a "W" there
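The scan-code translation described above can be sketched like this. The table values are the standard IBM PC/XT "set 1" make codes for the three letter rows plus the space bar; how tswett[m]'s simulator actually stores its table is not known from the log, so treat this purely as an illustration. In set 1, a key release is the make code with bit 7 set, which is why the sketch can ignore releases with a simple range check:

```python
# IBM PC/XT "set 1" make codes for the letter rows plus space;
# releasing a key sends the make code with bit 7 set (code | 0x80).
SCANCODES = {}
for base, row in ((0x10, "qwertyuiop"), (0x1E, "asdfghjkl"), (0x2C, "zxcvbnm")):
    for i, ch in enumerate(row):
        SCANCODES[base + i] = ch
SCANCODES[0x39] = " "

def translate(codes):
    """Turn a stream of scan codes into text, skipping key releases
    (bit 7 set) and unknown codes instead of printing a blank for them."""
    return "".join(SCANCODES[c] for c in codes
                   if c < 0x80 and c in SCANCODES)
```

Shift, control, and the other modifiers the conversation mentions would need a small state machine on top of this table, since they change the translation of subsequent codes rather than producing characters themselves.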