00:29:30 -!- MDude has joined. 00:42:11 -!- budonyc has quit (Quit: Leaving). 00:55:47 -!- Lykaina has joined. 02:07:21 https://pressron.wordpress.com/2016/08/30/what-we-talk-about-when-we-talk-about-computation/ 02:08:30 this post is interesting, because it presents a dichotomy that I've been struggling with. 02:08:40 that of models of programming vs. models of computation. 02:14:07 -!- lf94 has joined. 02:14:13 Modal the language is a model of programming, while the underlying machine model is a model of computation. 02:15:10 the split between the two is pretty clear; a model of programming requires a validator, compiler or other machinery to function properly prior to performing computations, while a model of computation doesn't. 02:16:20 I don't like models of programming because there's always something to be simplified. some underlying detail that can be exposed, judged and eliminated if found redundant. 02:16:33 you can _build_ models of programming on top of models of computation. 02:17:20 so it's a matter of finding just the right match. 02:35:31 -!- Lykaina has quit (Quit: leaving). 03:44:41 -!- imode has quit (Ping timeout: 246 seconds). 03:57:13 -!- sprocklem has joined. 04:06:25 [[1+]] https://esolangs.org/w/index.php?diff=66328&oldid=66320 * TwilightSparkle * (+87) 04:43:17 -!- Lord_of_Life has quit (Ping timeout: 265 seconds). 04:47:02 -!- Lord_of_Life has joined. 04:53:46 -!- rodgort has quit (Quit: Leaving). 05:01:19 -!- rodgort has joined. 05:08:42 -!- atslash has joined. 05:17:01 -!- atslash has quit (Quit: Leaving). 05:19:28 -!- MrBismuth has quit (Ping timeout: 245 seconds). 05:21:06 -!- atslash has joined. 05:25:45 -!- MrBismuth has joined. 05:29:43 [[1+]] https://esolangs.org/w/index.php?diff=66329&oldid=66328 * TwilightSparkle * (+147) 05:43:40 [[1+]] M https://esolangs.org/w/index.php?diff=66330&oldid=66329 * TwilightSparkle * (+102) /* Turing-Completeness */ 05:49:44 -!- JonoCode9374 has joined. 
05:51:34 -!- JonoCode9374 has quit (Remote host closed the connection). 05:56:07 -!- oerjan has quit (Quit: Nite). 08:03:24 -!- cpressey has joined. 08:08:57 Good morning. Interesting conversation here yesterday, but largely a tangent and I never returned to the thing that started it, which was: 08:09:05 It's interesting that it's a version of Rice's theorem that holds for programs that are guaranteed to halt. 08:09:49 Rice's theorem states that all nontrivial properties of the RE sets are undecidable. There are gobs and gobs of nontrivial properties of recursive functions that are decidable. 08:12:59 Rice's theorem doesn't say anything about sets other than RE sets, and neither, as far as I can tell, does "Rice's theorem for the computable reals". 08:13:37 Nor would I expect "Rice's theorem for the computable reals" to say anything about sets other than the RE sets, because the computable reals are an RE set. 08:43:47 -!- j-bot has quit (Ping timeout: 245 seconds). 08:43:51 -!- HackEso has quit (Remote host closed the connection). 08:43:58 -!- HackEso has joined. 11:08:43 cpressey: No, I certainly wasn't suggesting that it's a direct consequence of Rice's theorem. 11:10:05 Rice's theorem is a connectedness theorem, and so is "Rice's theorem for the computable reals". 11:14:45 Actually I'm not sure what you're saying here. 11:15:23 I'm saying "Rice's theorem for the computable reals" is not "a version of Rice's theorem that holds for programs that are guaranteed to halt." 11:16:02 Rice's theorem is a statement about the RE sets. So is "Rice's theorem for the computable reals". 11:17:35 They don't say anything useful when you apply them to a more specific set, like R. 11:18:52 By "a version of", I mean "it makes the same statement about a different thing". 11:19:19 The computable reals are an RE set. 11:19:32 It's not an essentially different thing. 11:19:55 Wait, what do you mean? Of course it is. 11:20:01 Explain how. 11:20:09 No, wait. 
11:20:12 This is the tangent. 11:20:26 This has nothing to do with machines that always halt. 11:20:43 You don't have a version of Rice's theorem that says something about machines that always halt. 11:21:03 A version of Rice's theorem that holds for [a special set of] programs that are guaranteed to halt. 11:21:16 Or equivalence classes thereof. 11:21:32 In brief, what does this theorem say? 11:21:40 Which one? 11:21:48 "A version of Rice's theorem that holds for [a special set of] programs that are guaranteed to halt." 11:22:07 I think I was just talking about Rice's theorem for the computable reals at that time. 11:22:18 What does this theorem you refer to, say? 11:22:23 A computable real is a program : Q+ -> Q 11:22:53 Such that two computable reals that converge to the same value are considered equivalent. 11:23:25 "Rice's theorem for the computable reals" says that, given a computable real, no nontrivial property of the real number it represents is decidable. 11:24:00 "The real number it represents" is not a guaranteed-to-halt function. 11:24:20 What? Functions don't halt. 11:24:23 shachaf: that statement doesn't feel quite right 11:24:32 And real numbers aren't functions. 11:24:43 "The real number it represents" is not a guaranteed-to-halt program. <--- better? 11:24:51 No, real numbers aren't programs either. 11:25:09 You literally just said they were. 11:25:13 shachaf | A computable real is a program : Q+ -> Q 11:25:35 OK, that's probably the thing Taneb was objecting to. 11:25:42 I'm on the cusp of concluding you're arguing in bad faith. 11:25:55 Yikes. Let me try to be more precise. 11:26:12 in what sense is a program from Q+ to Q a real? 11:26:40 Given a series of approximations, find a closer approximation? 11:26:45 A computable real is a special kind of real number. 11:27:23 A "program representing a computable real" -- which I should have a shorter name for -- is a program that can approximate it to any precision you ask for. 
11:28:10 then what is the Q+ ? 11:28:18 Positive rationals. 11:28:29 We can know a lot more about that program, than we can know about the computable real that it computes finite approximations of. You seem to be conflating these two objects. 11:29:11 Yes, the theorem is definitely about properties of the real represented by the program, not the program itself. 11:29:19 This is also the case with the regular Rice's theorem. 11:30:39 If r is a computable real, it has a bunch of programs representing it. Say f is one. For any positive rational error bound e, |f(e) - r| < e 11:30:58 "If P is a function from the computable reals to booleans, then P is either trivial or undecidable", is this a correct statement of the theorem? 11:31:35 Sure. 11:31:43 (Where I guess "undecidable" means "uncomputable".) 11:32:06 Hmm, I should be more careful since I'm trying to be precise. 11:32:10 shachaf: For "This is also the case with the regular Rice's theorem", could i ask you to actually restate "the theorem is definitely about properties of the real represented by the program, not the program itself" in terms of regular Rice's theorem. 11:32:42 -!- j-bot has joined. 11:33:07 Taneb: A function takes an argument which is a computer program. It can decide plenty of things about those programs. If we insist that it returns the same value for the same computable real, whatever program we use to represent it, then it's uncomputable. 11:33:26 i wld agree with that 11:33:34 Yes, I agree with that too 11:33:36 we cannot decide if a computable real is positive 11:34:18 We cannot decide programmatically in general, certainly 11:34:47 We can fairly easily decide whether 4, which is a computable real, is positive or not 11:36:40 -!- xkapastel has joined. 11:36:43 cpressey: The regular Rice's theorem says the same thing. 
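shachaf's representation above (a program from a positive rational error bound e to a rational within e of the real, |f(e) - r| < e) can be sketched concretely. A minimal illustration in Python; the function name and the digit-truncation strategy are my own, not anything from the discussion:

```python
from fractions import Fraction

def third(eps: Fraction) -> Fraction:
    """A representative of the computable real 1/3: for any positive
    rational error bound eps, return a rational within eps of 1/3."""
    # Truncate 0.333... after n digits; the truncation error is
    # 1/(3*10^n), which is below eps once 10^-n < eps.
    n = 1
    while Fraction(1, 10 ** n) >= eps:
        n += 1
    return Fraction(int("3" * n), 10 ** n)

# The program always halts for every bound, yet nontrivial properties
# of the *real it represents* (e.g. "is it positive?") are undecidable
# when all you are handed is an arbitrary representative.
approx = third(Fraction(1, 1000))
```

The point of the sketch: `third` is a total program, and questions about the program text are decidable; the theorem is about the real it stands for.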
11:36:57 cpressey: https://en.wikipedia.org/wiki/Rice%27s_theorem calls it "semantic properties" 11:38:16 shachaf: So what does any of this have to do with always-halting programs? 11:38:41 Only that representatives of computable reals always halt. 11:39:08 I think I phrased it in that particular way as a leadup to something else which maybe I didn't say. 11:39:10 What does Rice's theorem tell you about these representatives? 11:39:20 Absolutely nothing 11:39:27 Taneb: I agree. 11:39:53 Which is why I find "It's interesting that it's a version of Rice's theorem that holds for programs that are guaranteed to halt." so misleading. 11:41:24 The programs representing computable reals must halt by any reasonable definition 11:41:36 (or be productive, but that doesn't really work with TMs) 11:41:38 "it's a version of" doesn't mean "it's a consequence of" 11:42:08 "It's a version of" means it can be stated as a separate theorem. Can you state it? 11:42:38 I'd love to hear this version of Rice's theorem that says interesting things about programs that are guaranteed to halt. 11:42:43 Rice's theorem for computable reals? 11:42:50 It's been stated multiple times in the past half hour 11:43:06 Rice's theorem for the representatives of computable reals. 11:43:20 No such thing, just like there's no Rice's theorem for Turing machines 11:43:39 Taneb: I suppose it's too much to hope to hear shachaf agree that there's no such thing? 11:44:20 I feel like Taneb is mostly in complete agreement with me here. 11:44:44 At least I feel like I'm in complete agreement with Taneb. 11:44:48 shachaf: certainly in the broad strokes 11:45:03 Taneb: Every last detail, no exceptions! 11:45:31 Okay, well. Is the nature of my objection still unclear? 11:45:43 cpressey: If we call the regular one "Rice's theorem for partial functions", then we can call this one "Rice's theorem for the computable reals". 
11:46:13 In which case, sure, "Rice's theorem for Turing machines" and "Rice's theorem for representatives of computable reals" don't exist? 11:46:18 shachaf: I don't care what you call it, I don't care about connectedness, I'm trying to say it has no bearing on always-halting programs. 11:46:33 This seems like a silly distinction to make, because Rice's theorem is about both Turing machines and partial functions. 11:47:12 OK, I really don't understand what you're saying. Representatives of computable reals are a special set of programs that always halt. 11:47:25 Yes, I know. 11:47:55 This theorem says, given one of these programs, what can you decide about the thing it represents? 11:48:02 Or rather it says you can't decide anything. 11:48:45 I feel like you must be reading something more into what I said before but I'm not sure what. 11:49:12 I guess I'm pretty bad at explaining what I'm objecting to. 11:49:19 Representatives of computable reals are programs that always halt, and this theorem has a bearing on them. That seems clear and uninteresting so it's probably not what you mean. 11:50:25 I guess it must not be. 11:51:48 Suppose I have a program, call it P. 11:51:56 There's a separate question that I had the other day, which is: Given a Turing machine that's guaranteed to halt, what properties are decidable about the function it represents? 11:52:08 Suppose P is known to always halt. But suppose that I don't know what P is supposed to do. 11:52:17 But that question had nothing to do with the reals. 11:52:56 And also nothing to do with Rice's theorem. 11:53:29 Lots and lots of nontrivial properties are decidable for a machine that's guaranteed to halt. 11:53:47 "Does it accept '123'?" Well, just run it and see? 11:54:13 I certainly didn't have the impression you thought of this as a "separate question". 11:54:39 It's interesting that it's a version of Rice's theorem that holds for programs that are guaranteed to halt. 
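cpressey's "just run it and see" point is easy to demonstrate: for a program known to halt on all inputs, any pointwise question is decided by evaluation. A sketch with a made-up total program (the program and its accepted language are purely illustrative):

```python
def total_prog(s: str) -> bool:
    # A hypothetical program that is guaranteed to halt on every input:
    # it accepts exactly the strings of even length.
    return len(s) % 2 == 0

# "Does it accept '123'?" is decidable for a guaranteed-to-halt program:
# run it and inspect the answer. No Rice-style obstruction applies to
# this pointwise question about a total program.
accepts_123 = total_prog("123")
```

This is the sense in which there are "gobs and gobs" of decidable nontrivial properties of total programs, even though properties quantifying over all inputs can still be undecidable.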
11:54:45 What are other things you can say about programs that you know some things about? 11:54:51 For example what properties of a total function are decidable given a guaranteed-to-halt program that computes it? 11:57:15 Yes, those are separate questions. 11:57:46 What I was thinking when I said that was: We know Rice's theorem, which is about Turing machines we don't know anything about. But what about the large universe of possible theorems about what's decidable if we know some things? 11:58:05 properties that can be decided by finite support 11:58:51 Yes, that sounds right. 12:00:32 `olist 1181 12:00:33 olist 1181: shachaf oerjan Sgeo FireFly boily nortti b_jonas 12:00:37 Okay, my confusion was regarding how closely-related you were proposing those statements to be. 12:02:06 Or questions, rather 12:02:22 This computable real theorem is certainly not about all Turing machines that always halt. And it's not about them representing the same kind of thing, either. 12:02:57 So a few offhand sentences 24 hours ago were written confusingly. 12:03:03 But I don't think that's bad faith. 12:07:47 So you and Taneb both used the word "productive", and I suspect that's how I think of computable reals -- and definitely not as being represented by halting approximation functions -- which probably contributes to some of the confusion too. 12:08:12 Oh, you'll object to "halting function" again now won't you 12:08:56 I wasn't going to, but if you're being precise, I don't think a function can halt or not halt (though a partial function can be defined or not defined at a point). 12:09:17 Well, I can't call them "total functions" because the total functional programming people decided that includes things that map things to bottom 12:09:37 What? No they haven't. 12:11:12 That was kind of supposed to be a joke. 12:11:22 Ah. 
12:11:34 Anyway, I don't think there's a big distinction between the "productive" representation and the one I used, except Turing machines aren't usually very good at expressing productivity. 12:12:07 I want to add one important distinction, which is that a "productive" machine can't go back and change its mind an unbounded time after it's outputted something. 12:12:52 I was at no point considering an unretractable output tape - just the regular tape of a TM. That's how I learned the definition of computable number, that's all. 12:12:58 I think you permitted that with your decimal expansion representation the other day, and it doesn't work. 12:13:21 If your tape is completely retractable, then you can't get any information at all about the real number in any finite amount of time. 12:13:27 That seems pretty bad. 12:14:16 That's what they call "uncomputable", yes. But at the same time, you don't have to have a strong guarantee about when and where a cell of the tape will never be overwritten. 12:14:36 Just that it will eventually not be overwritten again. 12:14:56 "Revised" might be a better word even. 12:15:50 I want the computable real to give me definite information about the real in finite time. I think that's a pretty reasonable request of a program. 12:15:51 As I said yesterday: this is basically Turing's original formulation; apparently it's subtly wrong? 12:16:14 Finite, but not necessarily known beforehand how much. 12:16:22 Well, it's wrong in the sense that it doesn't agree with the modern definition, and it's not very useful. 12:20:32 It's easy enough to fix it to work: You just insist that you can't rewrite the number "too much". 12:20:59 In particular, when you write the nth bit, you can't change the number you've already written by more than 2^-n. 12:22:45 Yes, I don't disagree with this. 12:22:52 Something like this. 12:23:43 Maybe you could say, Every finite prefix of a computable real is computable in finite time. 
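shachaf's fix above (writing the nth bit may change the already-written value by at most 2^-n) is a quickly-converging-sequence condition, and it is easy to check mechanically. A sketch with exact rationals; the helper name and the 1-indexed bound are my own conventions:

```python
from fractions import Fraction

def respects_revision_bound(values):
    """Check that the nth revision (1-indexed) moves the previously
    written value by at most 2^-n, per the fix discussed above."""
    return all(
        abs(values[n] - values[n - 1]) <= Fraction(1, 2 ** n)
        for n in range(1, len(values))
    )

# Refining 0.1 -> 0.11 -> 0.111 (binary) respects the bound; a late
# jump from 0 straight to 1 would violate it, so early output is
# genuine information about the real.
ok = respects_revision_bound(
    [Fraction(1, 2), Fraction(3, 4), Fraction(7, 8)])
```

Under this constraint every written prefix pins the real down to a small interval, which is exactly the "definite information in finite time" being asked for.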
12:24:10 That leads into having a halting approximation program. 12:25:33 Well, reals don't have a unique prefix, is what I'm trying to say in the first place. 12:26:39 If you can't eventually nail down the first digit of a computable real, then it's not computable. 12:26:52 What's the first digit of 0.333... + 0.666...? 12:27:30 Oh, "eventually". That doesn't sound very computable to me, if it's not bounded. 12:28:22 But you're okay with "productive". 12:28:36 To me they're not different? 12:29:10 I'm OK with "productive" but not "a productive stream of digits". 12:30:03 Exactly because you can't always nail down even the first digit. 12:31:50 The sum of 0.333... + 0.666... is 0.(3,6)(3,6)(3,6)..., so the first digit to the right of the decimal point is (3,6). 12:32:42 What's that? 12:35:16 You wouldn't like it. It's got something to do with a productive stream of digits. 12:36:17 What is 0.333... if not a productive stream of digits? And you're asking me to *add* these objects which you apparently do not accept. 12:36:44 Yes, I'm bringing up this object to show the problem with the stream of digits representation. 12:36:59 But maybe you're still talking about the version that supports rewrites, in which case it works fine. 12:38:35 I don't have any problem with 0.333... + 0.666... = 0.999..., for what it's worth. 12:39:25 [[1+]] M https://esolangs.org/w/index.php?diff=66331&oldid=66330 * TwilightSparkle * (+2) 12:40:10 Is the fact that 1.000... and 0.9999... represent the same number, an actual problem? 12:40:38 0.(3,6)(3,6)(3,6)... can represent that number too, as far as I can see? 12:41:49 For 1.111... - 0.111... I might nail down the first few digits as 1.000... but regardless of my choice of which to nail down, it's the same number? 12:43:11 I don't understand your notation. 12:43:39 The problem is that if the trillionth digit is a 7, you might have to go back and change the 0. to a 1.. 
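The 0.333... + 0.666... obstruction above can be verified mechanically: no finite prefix of the two digit streams pins down even the integer part of the sum. A sketch with exact rationals; the interval test is deliberately conservative (it treats an interval that touches an integer as undetermined):

```python
from fractions import Fraction

def integer_part_determined(a_digits, b_digits):
    """Given n post-point digits of each addend, the two unseen tails
    each lie in [0, 10^-n), so the sum lies in [lo, lo + 2*10^-n).
    The integer part is determined only if that interval stays on one
    side of every integer (conservative check)."""
    n = len(a_digits)
    lo = sum(Fraction(a + b, 10 ** (i + 1))
             for i, (a, b) in enumerate(zip(a_digits, b_digits)))
    hi = lo + Fraction(2, 10 ** n)
    return int(lo) == int(hi)

# For 0.333... + 0.666..., every prefix leaves even the first digit open:
# the partial sum is 0.99...9 and the tails could push it past 1 or not.
undetermined = not integer_part_determined([3] * 20, [6] * 20)
```

This is the concrete sense in which a non-retractable digit stream cannot be "productive" for sums like this one.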
12:58:10 Sure, so maybe you can't nail it down until you've passed the trillionth digit. But there is some point at which you can nail it down. That's what I mean by "eventually". If there is no such point, then it's not computable. 12:59:28 But you don't know what that point is. 12:59:37 That's what I want out of "computable". 12:59:49 (And also the standard definition.) 13:00:16 (I should be more careful with saying things like that parenthetical.) 13:00:38 But if it's a computable real, you know that the point exists, at some finite time after you start. Thus: Every finite prefix of a computable real is computable in finite time. 13:01:40 Is 0.9 a prefix of 1.001? 13:01:49 Oh, I misread. 13:10:38 Anyway that's why I don't have a problem with writing digits to a TM's tape. You get the 1st one in finite time, you get the 2nd one in finite time, and so forth. Maybe this isn't the best way, maybe it's what you and/or Taneb mean by "TMs are bad at that", but I don't see how it's invalid. 13:11:08 You can't report when you have a digit 13:15:52 -!- arseniiv has joined. 13:16:15 I'm fine with that, and also I'm fine with, if you do want such a report, you can use some other representation. 13:40:58 imode: re. models of computation vs. models of programming: an interesting thing though I’m not sure this is how it is? 13:51:56 arseniiv: why do you think that? 13:52:19 I think there is a clear distinction, and if we were to program in assembly, our model of programming directly maps to the model of computation 13:54:01 lf94: I haven’t read to the end yet 13:59:05 cpressey: This is the distinction between "Cauchy sequences" (normally used for the reals) and "quickly-converging Cauchy sequences" (normally used for the computable reals). 13:59:54 I think by "TMs are bad at that" Taneb just meant that there's no standard mechanism for TMs to produce intermediate output. But you can make one up easily enough. 
14:00:13 Anyway quickly-converging Cauchy sequences have a guaranteed rate of convergence, which makes them useful for computation. 14:02:45 re. that post: hm clearly Minsky machines compute (partial) functions N^m → N^n. These functions of course aren’t that sophisticated but… I don’t see a clear border 14:06:02 still that distinction has some merit 14:12:36 > All those representations pass Bauer’s first condition — they are directly useful for simulating computations — but they differ widely with respect to the second test, namely the complexity of deciding the language of the representation. => aah 14:12:37 :1:38: error: lexical error at character 's' 14:12:52 oops I did it again 14:24:45 -!- Sgeo_ has joined. 14:28:07 -!- Sgeo has quit (Ping timeout: 265 seconds). 14:29:28 -!- sprocklem has quit (Ping timeout: 245 seconds). 14:33:47 -!- arseniiv_ has joined. 14:36:02 -!- arseniiv has quit (Ping timeout: 240 seconds). 14:56:24 -!- xkapastel has quit (Quit: Connection closed for inactivity). 15:17:09 -!- imode has joined. 15:31:35 arseniiv_: re: that classification, yeah it's incredibly useful. it allows us to actually classify different formal systems. 15:31:56 in a more productive way than just feeling around. 15:33:32 imode: could you modify modal so I can pipe content to it 15:33:45 I would like to write my modal code in vim, and then pipe this content to modal 15:34:04 I guess I can just call it... 15:34:11 lf94: sure, I can do that. 15:34:25 Also I don't think the clear command works 15:34:38 IMO clear command is very important actually. 15:34:50 You can dynamically re-arrange statements to your benefit 15:34:57 (As in, clear command should be a fundamental importance) 15:35:10 the clear command works. it just resets everything to defaults. 15:35:24 line 407. 15:35:24 I tried it the other day ... maybe it was on your telnet interpreter... 15:35:31 ok :) 15:36:01 I'll add a command switch to just evaluate a file. 
15:38:18 or just give you an alternative interpreter. 15:38:40 you should add a switch for interpreter mode 15:38:46 otherwise, pipe mode 15:38:47 :D 15:39:07 bear in mind, this language/interpreter is not being worked on. 15:42:00 it doesn't need to be :) 15:42:30 This is very exploratory for me 15:45:05 https://hatebin.com/wljbtaonwn here you are. 15:45:22 just run it the same as normal. it'll evaluate your file and exit. 15:45:33 along with printing out the time taken. 15:45:58 to test, just run it with a file containing 1 + 2 15:45:59 or something. 15:50:55 modal is a bad language because it's unpredictable and masks the underlying model of computation while relying heavily on it for its reduction semantics. 15:54:43 lf94: what I was working on _after_ modal is a combinatory interpreter using some of the same evaluation tactics as modal, i.e using a queue. 15:55:15 combinatory logic is confusing even to me, though. not because the underlying model is confusing, but after a certain point, things just look like a mess. 15:55:28 that was about the time I swapped to more machine models. 15:57:23 -!- cpressey has quit (Quit: A la prochaine.). 16:03:26 "it's unpredictable" 16:03:45 eh, that was a misspeaking. its evaluation method is unconventional. 16:03:49 I think it can be made predictable 16:04:01 using certain patterns or maybe modifying the interpreter behavior a tad 16:05:36 a better method imho is to keep the _concept_ of term rewriting around, but look at the base model of computation a little more closely. 16:06:19 this interpreter is juggling two associative arrays, one of which is totally cleared every time a reduction takes place and is only needed for the internal mechanics of reduction. it is also juggling a queue that holds the entire expression. 16:07:14 dropping the reduction semantics just gives you a queue automaton that behaves a lot like a traditional Forth if you add the right operators other than dup, swap, drop, namely 'last'. 
16:08:49 the next question is.. is this the right level of abstraction. we've traded something reasonably elegant for a machine model, with instructions executing sequentially, etc. 16:09:49 and better yet, is this the right machine model. 16:12:19 I was thinking maybe assembly with a more modern macro system would be the best. 16:13:06 performant and maintainable code 16:13:25 and you can write a simple interpreter if you keep to a subset of a popular ISA 16:13:53 and then you can write in this assembly language forever 16:14:22 and then you can build more powerful tools on top of this 16:18:12 yeah. if you telnet into imode.tech on port 1024, you can see something I've been working on. 16:18:43 it's a forth-like concatenative language based around queue automata. 16:43:00 -!- Lord_of_Life_ has joined. 16:45:44 -!- Lord_of_Life has quit (Ping timeout: 268 seconds). 16:45:52 -!- Lord_of_Life_ has changed nick to Lord_of_Life. 16:55:05 -!- FreeFull has joined. 17:01:57 -!- Phantom_Hoover has joined. 17:05:11 -!- arseniiv_ has changed nick to arseniiv. 18:03:39 -!- LKoen has joined. 18:34:57 imode: read that article you sent me yesterday. very interesting. 18:35:28 Only further makes me think, why are we not working from machine models 18:36:34 I guess it really _does_ prevent us from higher order thinking 18:37:08 like a register move: ld a, b is /similar/ to a = b; but it's also vastly different 18:37:24 a = b, a and b are very abstract, whereas ld a, b; these are concrete 18:39:31 -!- sprocklem has joined. 
18:41:16 I think this is why stack based languages are appealing 18:45:58 (the stack provides a way to have arbitrary variables) 18:54:46 I'm seeing that you can write a forth interpreter/compiler in such a way that it can re-arrange the postfix to become prefix 18:54:49 + 1 2 vs 1 2 + 19:03:28 yeah one needs to read RTL instead of LTR and reverse arguments of all ops 19:04:30 though we can read LTR and use a stack of incomplete applications 19:11:36 I'm just saying we can have a forth where we read LTR 19:35:05 -!- andrewtheircer_ has joined. 19:35:41 hi 19:37:16 lf94: of course 19:37:34 andrewtheircer_: hi 19:37:40 sup 19:37:46 new eso projects? 19:51:59 lf94: a more pure form imo of forth-like langs is ones where a queue is used. because of the universal nature of queue automata (they're turing complete), you can theoretically pack everything from main memory to a call stack onto a single work area. 19:52:14 yes 19:53:22 whether it's a suitable machine model is something I'm investigating. 19:54:33 why wouldnt it be 19:54:51 I'm a sucker for minimalism. 19:55:34 as far as I can tell, the only queue manipulation primitives required for a model like mine are duplicate, swap, drop and an operator I'm calling 'last'. 19:55:54 this plus control flow structures and comparisons. 19:56:47 -!- sprocklem has quit (Ping timeout: 265 seconds). 19:56:51 'last' seems to be a dirty hack as well. it takes something from the end of the queue and pushes it to the start. 19:57:03 imode call it peek 19:57:09 oh 19:57:17 3 2 1 last -> 1 3 2 19:57:21 imode: if your queue is circular... 19:57:26 call it rotate 19:57:36 or shift 19:57:39 it's less of a naming thing and more of an issue with semantics. 
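arseniiv's "read LTR and use a stack of incomplete applications" can be sketched for binary prefix operators (`+ 1 2` style); the tiny operator table and integer-only tokens are simplifying assumptions of mine:

```python
def eval_prefix(tokens):
    """Evaluate binary prefix notation left to right, keeping a stack
    of incomplete applications (operator plus arguments seen so far)."""
    ops = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}
    stack = []   # entries: [operator, args collected so far]
    value = None
    for tok in tokens:
        if tok in ops:
            stack.append([tok, []])
            continue
        value = int(tok)
        # A finished value feeds the innermost incomplete application;
        # completing one may in turn complete the application above it.
        while stack:
            stack[-1][1].append(value)
            if len(stack[-1][1]) < 2:
                break
            op, (a, b) = stack.pop()
            value = ops[op](a, b)
    return value

result = eval_prefix("+ 1 2".split())   # same value as postfix "1 2 +"
```

No argument reversal or right-to-left pass is needed; the pending-application stack does the reordering.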
19:58:09 that's just like a bitwise shift though 19:58:16 that would be a right-shift 19:58:32 left-shift would go the other way 19:58:36 2 1 3 19:58:54 'last' can be implemented in a traditional queue automaton pretty easily: all you have to do is roll through the queue, keeping track of the last logical symbol you encountered. 19:58:55 I think shifting should be a primitive and is definitely not a hack 19:59:18 it takes linear time and uses the state of the automata's FSM to hold that last logical symbol. 19:59:32 what's this about 19:59:37 why not use a linked list 20:00:00 why use one, the interface is the same. 20:10:56 BTW is there a page with computational automata hierarchy? It could be useful for someone 20:11:40 I’d call imode’s one a half-deque automaton 20:12:50 automata 20:13:03 we have pop/push at the front and pop from the back, though one can factor dup, drop, swap and last through different set of operations 20:13:25 andrewtheircer_: plural of “automaton” :) 20:13:35 i was saying 20:13:38 automata is the topic 20:13:39 nice 20:13:43 i hope we can get to CAs 20:13:54 'cos i have made a metaphor out of them 20:14:41 there was a discussion of CAs a day earlier 20:15:09 though there wasn’t much and it wasn’t the topic itself 20:19:32 arseniiv: I'd call it a queue automaton with a built-in. 20:20:01 it's easy to construct a machine that performs 'last'. it just takes N*M time. 20:48:37 -!- arseniiv_ has joined. 20:51:37 -!- arseniiv has quit (Ping timeout: 250 seconds). 20:54:20 -!- arseniiv_ has changed nick to arseniiv. 20:59:20 -!- andrewtheircer_ has changed nick to andrewthediscord. 20:59:25 -!- andrewthediscord has changed nick to andrewtheircer. 21:10:12 -!- andrewtheircer has quit (Remote host closed the connection). 21:25:09 -!- Sgeo__ has joined. 21:28:47 -!- Sgeo_ has quit (Ping timeout: 268 seconds). 21:32:12 -!- Lykaina has joined. 
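The queue primitives under discussion (dup, swap, drop, last, with `3 2 1 last -> 1 3 2`) model directly with Python's deque standing in for the queue. A sketch; as imode notes, a bare queue automaton pays linear time for 'last' by rolling the whole queue through its finite control, whereas the deque hands it to us in one step:

```python
from collections import deque

class QueueMachine:
    """Queue with the four primitives discussed: dup, swap, drop, last.
    Index 0 is the start (front) of the queue."""
    def __init__(self, items):
        self.q = deque(items)
    def dup(self):                      # 3 2 1 -> 3 3 2 1
        self.q.appendleft(self.q[0])
    def swap(self):                     # 3 2 1 -> 2 3 1
        a = self.q.popleft()
        b = self.q.popleft()
        self.q.appendleft(a)
        self.q.appendleft(b)
    def drop(self):                     # 3 2 1 -> 2 1
        self.q.popleft()
    def last(self):                     # 3 2 1 -> 1 3 2
        # End element moves to the start; in a pure queue automaton this
        # is the linear-time "roll" described in the log.
        self.q.appendleft(self.q.pop())

machine = QueueMachine([3, 2, 1])
machine.last()   # queue is now 1 3 2, matching the example in the log
```

Since the machine pops and pushes at the front but also pops at the back, arseniiv's "half-deque automaton" label fits this interface.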
21:35:26 imode: in your pythonic C compiler, I saw both s.d and s->d, though you probably saw that already 21:35:43 compiler-to-C, I mean 21:36:23 ah, never mind 21:36:42 yup, the stack is a local. 21:36:58 I missed that there are *s in functions where s->... etc. 21:37:00 my bad 21:37:26 `:` is hard to type 21:37:26 ​/srv/hackeso-code/multibot_cmds/lib/limits: line 5: exec: :`: not found 21:37:37 eh 21:38:04 today I’m dancing on bots’ legs 21:39:33 and `?` and `;` are too even if we take `[…]` as primitive instead of `[` and `]` separately 21:40:20 though it’s obvious it’s me alone who wanted to type this language :D 21:42:39 I mean, for what I know, there’s no sense to encode the fact that `[` and `]` have to be balanced and `?` and `;` should only occur inside `[…]`, in their types 21:43:29 I don’t know if it’s even possible, and as a theoretical interest I’d be glad to see how if it is 21:44:35 and disregarding these syntactic constraints and `:` all of it is of course typable easily 21:45:30 if conservatively 21:46:38 -!- Sgeo_ has joined. 21:46:40 something I have been mulling about in the back of my mind is a graph rewriting language. 21:46:43 e. g. we could allow inside `[…]` a code that grows the stack, but we better disallow that 21:46:57 -!- Lykaina has quit (Quit: leaving). 21:47:11 imode: not like Eoderdrome or what it’s called? 21:47:26 ah, https://esolangs.org/wiki/Eodermdrome 21:48:03 ah′, it has a fixed initial state 21:48:23 probably not what you think 21:49:10 nah, more flexible. 21:49:19 I thought so 21:49:37 -!- Sgeo__ has quit (Ping timeout: 245 seconds). 21:49:54 imode: also am I right this compiler generates a code for `:` without bound checking? 21:50:05 you are correct. 21:51:49 aren’t you afraid somebody will write an OS in this language and make an error using `:`? ::D 21:52:12 if someone chooses this language to write an OS in, I'll be too shocked to consider the security issues. 
:P 21:52:34 hm I personally am afraid of C programs maybe now 21:52:47 * arseniiv segfaulted 21:52:56 gg. 21:53:20 the graph rewriting language would rewrite undirected, unlabeled graphs. 21:54:59 rules are of the form a1 a2, a3 a4, ..., aN aM -> b1 b2, b3 b4, ... bN bM; 21:54:59 hm it’s occurred to me now someone could do something with hypergraphs too 21:55:04 Sounds like Eodermdrome. 21:55:12 it isn't. 21:56:00 you essentially try to match the graph bit by bit on the left hand side, remove all matched items, and then construct a new subgraph on the right hand side. 21:56:21 how should it connect to the old parts? 21:56:23 here's an explosion: A B -> A B A 21:56:29 err. 21:56:36 A B -> A B, B A 21:57:06 that'd just be an infinite loop because the graphs are undirected. 21:57:27 if you do A B -> A B, B C, C A, you'll generate an ever-expanding loop of vertices. 21:57:37 any variables on the RHS are autogenerated vertices. 21:57:50 any variables that are unused, rather. sorry, cooking dinner. 21:57:54 Then it's an unlabeled multigraph? 21:58:12 no, just an unlabeled graph. multiple edges aren't allowed. 21:58:42 Oh, I didn't read the adjustment to "explosion". 21:59:08 if you do A B -> A B, B C, C A, you'll generate an ever-expanding loop of vertices. => not a simple loop though. It would look like some foam 21:59:24 it would, because it'd match nondeterministically. 21:59:57 it'd be a bunch of loops. small ones to large ones. 22:00:33 in order to make anything actually useful, you need to build two things: labels and directed edges. 22:00:43 it almost tempts me to analyze how their lengths would be distributed but I don’t know asymptotic methods 22:01:35 you can abstract high enough until you can build things like lists and sequential statements. you can construct finite state machines. 22:01:38 and a tape. 22:01:59 sky's the limit after that, tbh. 22:02:14 and a tape. 
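imode's expanding rule `A B -> A B, B C, C A` can be simulated on undirected simple graphs. A sketch; where the language described matches nondeterministically, this picks an edge deterministically, and the fresh-vertex numbering is my own convention:

```python
def rewrite_step(edges):
    """One application of  A B -> A B, B C, C A : keep a matched edge
    (A, B) and attach a fresh vertex C to both of its endpoints.
    Edges are frozensets {u, v}: undirected, unlabeled, no multi-edges."""
    edges = set(edges)
    a, b = sorted(min(edges, key=sorted))        # deterministic match choice
    c = max(v for e in edges for v in e) + 1     # fresh, auto-generated vertex
    edges |= {frozenset((b, c)), frozenset((c, a))}
    return edges

# Starting from a single edge, each step adds one vertex and two edges,
# growing the ever-expanding structure of loops discussed in the log.
g = {frozenset((0, 1))}
g = rewrite_step(g)   # first step yields the triangle {0,1},{1,2},{2,0}
```

With a nondeterministic match instead of `min`, repeated application gives the "foam" of small and large loops arseniiv describes.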
=> it was obvious in hindsight :D
22:02:29 Anyway, it *still* sounds quite Eodermdrome-y, disregarding the input/output aspects. But maybe the devil's in the details then.
22:02:36 you can build, for example, a lot of RDF stuff.
22:02:53 fizzie: it's pretty far from eodermdrome because I can construct arbitrary graphs.
22:03:37 you can reconstruct SPARQL using the language.
22:04:50 one thing I dislike about this approach is 1. there's no clear method on storing a graph (many approaches, not all of them ideal), and 2. the rewrite mechanism involves some behind-the-scenes work.
22:07:20 there's no machinery that directly correlates to a graph rewrite language. string rewriting and turing machines/queue machines are pretty much hot swappable.
22:08:18 labeled metavertices and oriented metaedges are an interesting problem, it’s not immediately obvious how to construct them for abstraction not to break when matching them
22:10:05 (also for the sake of fun I’d consider hyperedges. They don’t seem to make things much easier nor much harder)
22:10:18 (I mean, as a primitive)
22:12:01 I see a metavertex as a core which encodes its label and has e. g. a special vertex to which “edge hubs” are connected
22:12:05 hm no
22:12:58 maybe we should delineate parts encoding different things by several graphs not occurring in any other place
22:14:50 like K_n are not hard to match but K_n ⊂ K_(n+1)
22:15:42 K_m,n promise more, K_(m+1),n seems not to be a subgraph of K_m,(n+1) if m ≠ n?
22:18:14 hmm what if you allow n = 0
22:18:48 (which has the odd property that K_(m+1),n doesn't have any vertice with degree m+1)
22:19:04 vertex
22:22:57 yes we should pick m, n large enough
22:29:47 -!- sprocklem has joined.
22:30:47 there's a lot of issues imo with this kind of rewriting scheme, which is why I've strayed away from it.
22:31:01 a lot of times in rewrite languages it's hard to get your code to _not_ do something.
22:31:09 -!- Sgeo__ has joined.
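[editor's note: the rule shape described above, undirected simple graphs with rules written as edge lists on both sides, where unmatched right-hand-side variables become fresh vertices, can be sketched in a few lines of Python. This is a brute-force illustration of the semantics as the log describes them, not imode's actual implementation; the graph representation (a set of frozenset edges) and all names are assumptions.]

```python
from itertools import permutations

def apply_rule(edges, lhs, rhs):
    """One rewrite step on an undirected simple graph.

    edges: set of frozenset({u, v}); lhs/rhs: lists of (var, var) pairs.
    Returns the rewritten edge set, or None if the LHS doesn't match.
    """
    verts = sorted({v for e in edges for v in e})
    lvars = sorted({x for e in lhs for x in e})
    # brute-force search for an injective assignment of LHS variables
    for assignment in permutations(verts, len(lvars)):
        env = dict(zip(lvars, assignment))
        if all(frozenset({env[a], env[b]}) in edges for a, b in lhs):
            # remove all matched edges, then build the RHS subgraph
            new = set(edges) - {frozenset({env[a], env[b]}) for a, b in lhs}
            fresh = max(verts) + 1
            for a, b in rhs:
                for x in (a, b):
                    if x not in env:       # unused RHS vars are autogenerated
                        env[x] = fresh
                        fresh += 1
                new.add(frozenset({env[a], env[b]}))  # set => no multi-edges
            return new
    return None

# the "explosion" rule from the log: A B -> A B, B C, C A
g = {frozenset({0, 1})}
g = apply_rule(g, [("A", "B")], [("A", "B"), ("B", "C"), ("C", "A")])
# one step turns a single edge into a triangle; iterating keeps
# attaching new two-edge paths across matched edges, the "foam"
```

[note that because edges are stored in a set, the `A B -> A B, B A` variant really is a no-op loop, matching the "multiple edges aren't allowed" remark.]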
22:31:23 at first let’s encode a vertex labeled n ∈ N as A—(B—)^n where A, B are nice graphs and A has one special vertex and B has two, to link them; and encode an edge → as —C—D—, with former conventions
22:31:24 enter conditional or constrained rewriting
22:31:56 then we need to enforce constraints on A, B, C, D to not allow accidental matches
22:32:55 maybe we need to add “buffer vertices” connecting to a vertex and all the edges incident to it
22:33:34 or maybe they could connect closely
22:34:12 -!- Sgeo_ has quit (Ping timeout: 245 seconds).
22:34:30 is it a good idea?
22:36:42 I think the whole thing is a bad idea. I keep coming back to it to see if it's a good one, but every addition is just compensating for something.
22:37:03 maybe we need to add “buffer vertices” connecting to a vertex and all the edges incident to it => I mean “connecting to a metavertex and all metaedges…”
22:39:09 it certainly seems to not be too effective regarding its implementation nor implementation of something in it
22:39:32 to be not too*
22:39:55 linear media is always gonna win out vs. any kind of graph or tree structure.
22:40:32 hm I wonder now how about undirected trees
22:41:20 but maybe with a fixed finite set of labels
22:42:28 oh we could simulate a tape of directed trees
22:42:56 encode*
22:43:44 couldn’t simulate if subtree patterns aren’t allowed
22:44:08 [[Byter]] M https://esolangs.org/w/index.php?diff=66332&oldid=49879 * PaniniTheDeveloper * (+1)
22:47:14 -!- lambdabot has quit (Remote host closed the connection).
22:50:00 -!- lambdabot has joined.
23:09:40 maybe I should just use a tape instead of a queue.
23:10:32 -!- Phantom_Hoover has quit (Ping timeout: 246 seconds).
23:11:16 'last' is just "move the tape to the left" anyway.
23:11:33 hm.
23:12:51 -!- FreeFull has quit.
23:13:58 use two stacks (a rubber tape)
23:14:07 rubber tape?
23:14:19 you know, stretchable
23:14:26 flexible
23:14:45 ah, that you can just arbitrarily insert under the cursor vs. moving everything over?
23:14:51 yes
23:14:56 yeah I was thinking about that...
23:26:30 -!- LKoen has quit (Remote host closed the connection).
23:27:25 hm you can use ror and rol (one of them is last) and banish pick, as you can decrement the index and shift it further until it’s zero
23:28:07 though you’ll need to shift an additional copy of the unchanged index to return back
23:29:18 imode: ^ if that’s of any use
23:29:56 one of them is last => s/is/formerly known as
23:30:11 you know, it's funny.
23:30:52 https://hatebin.com/ysmcogarab
23:31:00 pretty much what I did. lol.
23:32:05 hm seems pretty long
23:32:56 though I don’t remember all the primitives, what they are this time?
23:33:30 dup, drop, swap, last, begin ([), repeat (]), if/while (?)...
23:34:12 ah I mis-eyed or something, then
23:34:21 there are definitions at the top
23:34:37 yup!
23:36:57 (hm is sswap really swapping two top values and then getting two from the bottom? why is it useful?)
23:38:24 I think I greatly misunderstand now, but maybe I’ll get it soon
23:39:42 sswap behaves like forth's swap.
23:40:08 swap grabs the top two values, then enqueues both of them but swapped.
23:40:15 aaaah
23:40:20 so grabbing the last two values means they're pushed to the front of the queue again.
23:40:30 1 2 3 sswap -> 2 1 3
23:40:42 vs. 1 2 3 swap -> 3 2 1
23:40:49 and 1 2 dup -> 2 2 1?
23:41:47 I was thinking today why you call it a queue automaton and didn’t even consider I should reinterpret what swap and dup are doing here
23:42:49 1 2 dup -> 1 2 1
23:42:58 you could also interpret it as 1 2 dup -> 2 1 1
23:43:08 what how
23:43:22 hm
23:43:26 examine the top of the queue (leftmost item)
23:43:35 enqueue a copy of it.
23:43:37 I didn’t read “1 2 3 sswap -> 2 1 3” properly
23:43:52 I thought you take from the right end
23:43:56 ahh, sorry.
23:44:02 my convention is always from the left.
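[editor's note: the queue-automaton semantics spelled out above (front of the queue is the leftmost item; ops dequeue from the front and enqueue at the back, while sswap pushes its values back onto the front to mimic Forth's swap) can be modeled with a deque. A minimal sketch reproducing the log's worked examples; how the real interpreter implements sswap is an assumption.]

```python
from collections import deque

# Convention from the log: leftmost element is the front of the queue.

def swap(q):
    # dequeue the front two values, enqueue them at the back, swapped:
    # 1 2 3 -> 3 2 1
    a, b = q.popleft(), q.popleft()
    q.append(b)
    q.append(a)

def sswap(q):
    # dequeue the front two values, push them back onto the front,
    # swapped; this behaves like Forth's swap: 1 2 3 -> 2 1 3
    a, b = q.popleft(), q.popleft()
    q.appendleft(a)
    q.appendleft(b)

def dup(q):
    # examine the top of the queue (leftmost item), enqueue a copy:
    # 1 2 -> 1 2 1
    q.append(q[0])
```

[the 1 2 dup -> 2 1 1 reading mentioned in the log would instead dequeue the front item and enqueue it twice.]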
23:44:16 I should have declare mine too
23:44:24 declared*
23:44:51 now I think it would go smoothly
23:47:05 telnet into imode.tech, port 1024 to access a REPL.
23:47:51 -!- LKoen has joined.
23:48:05 what's interesting is that you can define `factorial` as `: factorial range product ;`
23:51:43 -!- oerjan has joined.
23:56:22 -!- LKoen has quit (Quit: “It’s only logical. First you learn to talk, then you learn to think. Too bad it’s not the other way round.”).
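[editor's note: the `: factorial range product ;` definition is a point-free composition of two words. Assuming `range` expands n into the values 1..n and `product` folds them with multiplication (the log doesn't spell out their exact semantics), the Python equivalent is:]

```python
from math import prod

def factorial(n):
    # mirrors `: factorial range product ;`: build 1..n, then multiply.
    # prod of an empty range is 1, so factorial(0) == 1 falls out for free.
    return prod(range(1, n + 1))
```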