00:05:04 hm I tried to abuse PuTTY for telnet to no result 00:18:07 my REPL hung up on you? 00:19:11 I get a terminal window which closes almost instantly 00:19:41 that's... weird. 00:19:50 maybe try netcat instead. 00:21:02 oh now I get ::> but then a dialog box tells me “connection closed by remote host” 00:21:22 ... 00:21:25 bizarre. 00:21:50 no errors on my end. 00:26:37 usually I'd see "broken pipe" or something. 00:34:11 a tape with an insertion feature that can be implemented as a constant time operation is starting to look better than a queue. 00:34:29 can be implemented internally as a circular buffer that can expand on either end. 00:36:10 anyway maybe later I’ll run your old interpreter code if it’s not too old yet 00:37:24 arseniiv: https://hatebin.com/usnmammjft in case you want to run the REPL locally, sorry it's not working remotely. 00:38:54 imode: thanks once more! 00:39:54 np. 00:54:07 imode: roll won’t work for a singleton queue but last presumably will, so… 00:54:08 something I take issue with as well is the concept of "atomization" in the style of Böhm. structured languages need validation (matching beginning/ending loops, etc.) while unstructured needs minimal validation (checking that all jumps are within memory bounds). 00:54:16 arseniiv: singleton queue? 00:54:23 a queue with a single element? 00:54:28 precisely 00:54:36 yeah... 00:54:47 you can do : roll dup drop ; 00:55:03 but wouldn’t you make roll a primitive? 00:55:17 considering that circular buffer thing 00:55:18 why not, when you can define it? 00:55:27 ah yeah if I was using a tape, I'd just have left and right. 00:56:21 roll would be 'right' and last would be 'left'. 00:56:38 operands would just look left or right. 00:57:30 argwards and reswards :D 00:57:43 if you were to add two numbers together, for example, it'd "consume" them and insert the result under the cursor. internally, if you wanted to compile this down to something that didn't have insertion/deletion capabilities, you could implement shifting the tape left or right. 00:59:10 you can do : roll dup drop ; => neat! I haven’t checked immediately 00:59:46 because I implemented 'dup' to just examine the head of the queue. 01:00:15 i.e. 1 2 dup -> 1 2 1, vs. 1 2 dup -> 2 1 1 01:00:40 if you implemented it to be destructive, you'd do something like : roll dup last drop ; 01:00:40 it makes sense as you take arguments from the left… or right, I take from the right 01:01:01 ah, you mean that distinction 01:01:29 yeah. it's either "look at the head of the queue, see what its value is, enqueue a copy of it without dequeueing it." 01:01:38 or "dequeue an element, enqueue two of that element". 01:01:48 if you use the former, it's just : roll dup drop ; 01:02:02 if it's the latter, it's : roll dup last drop ; 01:02:06 if I had realized queue programming is so much more fun than the stack one some time ago 01:02:39 honestly this is more like tape-based programming with insertion and deletion under the cursor. 01:02:43 alas I remember writing two or three half-finished stack concatenative language implementations 01:02:45 but it _is_ hella fun. 01:02:57 same... I used to be a stack evangelist. 01:03:07 I can't justify the juggling and arbitrary stack primitives. 01:03:18 imode: now the tape, yes, it’s still fun and it even acquires additional charm 01:03:34 I had a sketch of a language that was essentially built in two parts. 01:04:02 the underlying state machine description of an operation/machine, and the macro language for stitching together machines. 
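A minimal Python sketch of the two dup semantics discussed above (illustrative only; this is not imode's REPL code, and the helper names dup_peek/dup_requeue are invented). The head of the queue is the left end of a deque, and roll follows the : roll dup drop ; definition under the non-destructive dup:

    from collections import deque

    q = deque([1, 2])              # head of the queue is the left end

    def dup_peek(q):
        # non-destructive: look at the head, enqueue a copy without dequeueing
        q.append(q[0])             # 1 2  ->  1 2 1

    def dup_requeue(q):
        # destructive: dequeue an element, enqueue two copies of it
        x = q.popleft()
        q.append(x)
        q.append(x)                # 1 2  ->  2 1 1

    def drop(q):
        q.popleft()                # discard the head

    def last(q):
        q.appendleft(q.pop())      # move the most recently enqueued element to the head

    def roll(q):
        # ": roll dup drop ;" under the non-destructive dup: rotates the queue by one.
        # With the destructive dup you would need ": roll dup last drop ;" instead.
        dup_peek(q)
        drop(q)

For example, roll on deque([1, 2, 3]) leaves deque([2, 3, 1]), and it still behaves sensibly on a single-element queue.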
01:04:23 I never got to a place where I was happy with it. 01:04:45 the macro language looked a lot like forth. 01:05:26 if you placed down a machine, and it was identified to have "anchor points", any machines you placed after that would be "glued" to those anchor points in sequence. 01:06:26 so 1 2 + if 4 5 + ; 6 7 + would reduce to an actual state graph. 01:07:28 hm I just had a non-minimalistic idea: if one is to go with a circular tape to the end, for every operation F there could be a dual operation F* taking each of its arguments from the other end and placing its results likewise, and if you define F, F* is defined automatically as the elementwise * of its body 01:07:38 1 -> 2 -> + -> if/1 -> 4 -> 5 -> + -> ; if/2 -> 6 -> 7 -> + -> halt 01:07:50 arseniiv: _exactly_. 01:08:31 and I still haven’t read the last posts 01:08:32 would definitely want a circular tape. for no other reason than it's confusing when considering boundary conditions. 01:09:03 a bounded tape is just a circular tape with markers. 01:10:38 you _could_ implement a two or one-way infinite tape, though. 01:10:59 just insert a blank. 01:11:59 why did we need an infinite tape yet? 01:12:12 TMs traditionally have an infinite tape. 01:12:24 i.e. you can keep going in a direction and you have the memory you need. 01:12:52 so if your goal is to reach the end of the universe and put a bit there, you can build a machine that does that... but it won't halt. 01:12:55 ah, there is a plan to fit a TM in this setting? 01:13:08 well, what you're slowly drilling down to _is_ a Turing machine. 01:13:16 /if/ you want to be Turing-complete, you need an infinite number of states (configurations)... the unbounded tape is how TMs achieve that. 01:14:38 the difference here is we're just talking about things built as extensions to TMs. 01:14:45 imode: I don’t want to end up with TM! :P I’m happy with a circular tape with a thing between two cells working on it, it’s better than a marked cell 01:15:35 well, what's a circular tape but a bounded/unbounded tape that has markers signifying where to seek when you wrap around. 01:16:20 of course there is an isomorphism 01:16:29 if your tape looks like [ 1 2 3 4 >5< ], and you move right, you can seek all the way back to [ and move right once. 01:16:45 hm or what is it called when one bounded tape corresponds to a class of unbounded tapes 01:17:09 I agree, of course 01:17:24 point is there's always a way to "drill down". 01:17:40 any operation can be reduced to its appropriate state graph. 01:17:56 though I like the circular presentation, it implies left/right are O(1) 01:18:13 exactly. you can "stop" at a certain set of operations and present them as constant time. 01:18:17 on a TM they are linear in our tape length 01:18:18 and implement them as primitives. 01:19:20 what could I say :) 01:20:32 the only issue I have is the command representation as it stands. this queue language is really just some macros that could reduce to a state graph. 01:21:06 I’ll go sleep of circular tapes of nonstandard lengths flying through ℵ₁ parallel universes 01:21:18 hahaha, dream well man. 01:22:08 -!- arseniiv has quit (Quit: gone completely :o). 01:26:50 -!- imode has quit (Ping timeout: 240 seconds). 01:34:07 `icode ℵ₁ 01:34:08 ​[U+2135 ALEF SYMBOL] [U+2081 SUBSCRIPT ONE] 02:10:08 -!- Phantom_Hoover has joined. 02:13:40 -!- Sgeo_ has joined. 02:16:42 -!- Sgeo__ has quit (Ping timeout: 245 seconds). 02:17:38 -!- Phantom_Hoover has quit (Ping timeout: 240 seconds). 02:43:31 -!- imode has joined. 
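A rough Python sketch of the circular tape being described above, with a cursor, O(1) left/right, and insertion/deletion under the cursor. This is one plausible reading of the discussion, not anyone's actual implementation; a production version would use a real double-ended ring buffer so insertion stays amortized constant time at either end:

    class CircularTape:
        def __init__(self, cells):
            self.cells = list(cells)       # stand-in for a ring buffer
            self.pos = 0                   # the cursor

        def left(self):
            self.pos = (self.pos - 1) % len(self.cells)

        def right(self):
            self.pos = (self.pos + 1) % len(self.cells)

        def insert(self, value):
            # insert under the cursor (amortized O(1) with a real ring buffer)
            self.cells.insert(self.pos, value)

        def delete(self):
            value = self.cells.pop(self.pos)
            if self.cells:
                self.pos %= len(self.cells)
            return value

        def add(self):
            # example operation: consume two cells starting at the cursor,
            # then insert their sum back under the cursor
            a = self.delete()
            b = self.delete()
            self.insert(a + b)

    tape = CircularTape([1, 2, 3, 4, 5])   # cursor on the 1, roughly [ >1< 2 3 4 5 ]
    tape.add()                             # tape.cells is now [3, 3, 4, 5]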
03:51:03 -!- Sgeo has joined. 03:53:38 -!- Sgeo_ has quit (Ping timeout: 240 seconds). 04:09:46 -!- Lykaina has joined. 04:44:35 -!- Lord_of_Life has quit (Ping timeout: 265 seconds). 04:47:40 -!- Lord_of_Life has joined. 04:54:35 [[1+]] M https://esolangs.org/w/index.php?diff=66333&oldid=66331 * TwilightSparkle * (+6) /* Examples */ Hash index starts from 0!! 05:10:51 [[1+]] https://esolangs.org/w/index.php?diff=66334&oldid=66333 * TwilightSparkle * (+149) /* Turing-Completeness */ 05:12:04 [[1+]] https://esolangs.org/w/index.php?diff=66335&oldid=66334 * TwilightSparkle * (+53) /* Turing-Completeness */ 05:19:29 -!- Sgeo has quit (Ping timeout: 276 seconds). 05:20:28 [[1+]] https://esolangs.org/w/index.php?diff=66336&oldid=66335 * TwilightSparkle * (+34) /* Turing-Completeness */ 05:25:53 -!- Lykaina has quit (Quit: leaving). 05:31:53 -!- Frater_EST has joined. 06:00:25 -!- oerjan has quit (Quit: Nite). 06:39:01 -!- Sgeo has joined. 07:05:23 -!- Lord_of_Life has quit (Read error: Connection reset by peer). 07:10:46 -!- Lord_of_Life has joined. 07:42:02 -!- imode has quit (Ping timeout: 268 seconds). 07:45:45 -!- LKoen has joined. 07:50:20 -!- LKoen has quit (Remote host closed the connection). 07:51:20 -!- LKoen has joined. 08:18:01 -!- cpressey has joined. 08:27:47 -!- MrBismuth has quit (Read error: Connection reset by peer). 08:31:36 -!- Phantom_Hoover has joined. 08:38:25 -!- LKoen has quit (Remote host closed the connection). 08:40:37 -!- LKoen has joined. 08:42:19 -!- LKoen has quit (Remote host closed the connection). 08:44:52 -!- LKoen has joined. 08:56:57 -!- LKoen has quit (Remote host closed the connection). 09:03:22 -!- LKoen has joined. 09:25:39 Hah: "As long as theoretical computer scientists can’t even prove basic conjectures like P≠NP or P≠PSPACE [...]" 09:26:02 (from Q8 at https://www.scottaaronson.com/blog/?p=4317) 09:27:41 i once witnessed someone trying to explain to somebody who really didn't get it why P is a subset of PSPACE 09:27:46 quite hilarious 09:28:43 intuitively this is easy... a P algorithm can only touch a polynomial amount of memory (tape, whatever). 09:29:02 Making it formal is probably quite tedious. 09:29:03 yeah, it is easy 09:29:54 (Especially if your model of computation is *not* Turing Machines but something closer to reality like RAM machines) 09:32:00 Whose name is attached to NSPACE(f(x)) = DSPACE(f(x)^2)? 09:32:14 idk, I've always seen RAM machines as just hiding (a polynomial number of) head movements 09:33:09 cpressey: Well personally I find P too coarse; exponents matter in practice. 09:33:47 There's the old joke that theory is more important than practice, except in practice 09:35:28 Savitch (I meant \subseteq instead of =... I kind of forgot to go back and change it.) 09:38:12 What's all this about quantum supremacy now. Oh gosh. Scott Aaronson. I'm not sure I want to read this. 09:39:09 Taneb: i once was told a piece of conversation like "so, how are your studies different from mine" "well, we are doing more theory: runtime analysis, proofs, ..." "if you prove something, it isn't theory anymore" 09:40:25 I suppose debunking quantum hype is a noble calling, but so is not caring about it. 09:40:42 cpressey: But he does care now. 09:40:42 cpressey: that article is neither 09:41:21 >_> I should stop trying to make jokes. 09:42:08 Sorry, I didn't observe any joke when I collapsed your sentence. 
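Spelling out the P ⊆ PSPACE intuition mentioned above (standard textbook reasoning, added for completeness): a deterministic machine running in time t(n) can visit at most t(n) + O(1) tape cells, so

    \mathrm{DTIME}\bigl(t(n)\bigr) \subseteq \mathrm{DSPACE}\bigl(t(n)\bigr),
    \qquad
    \mathrm{P} = \bigcup_{k} \mathrm{DTIME}(n^{k}) \subseteq \bigcup_{k} \mathrm{DSPACE}(n^{k}) = \mathrm{PSPACE}.

    \text{Savitch's theorem (the name asked about above): }
    \mathrm{NSPACE}\bigl(f(n)\bigr) \subseteq \mathrm{DSPACE}\bigl(f(n)^{2}\bigr).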
09:47:25 `unidecode ☰𝌁𝌃 09:47:26 ​[U+2630 TRIGRAM FOR HEAVEN] [U+1D301 DIGRAM FOR HEAVENLY EARTH] [U+1D303 DIGRAM FOR EARTHLY HEAVEN] 09:55:06 Is there also a trigram for just plain earth? 09:58:20 ski: ^^ question for you ;-) 10:10:27 `unidecode ☷ 10:10:27 ​[U+2637 TRIGRAM FOR EARTH] 10:20:21 I saw those at home, but at work they're just boxes. :/ 10:20:26 Must be missing some fonts. 10:41:17 -!- Frater_EST has quit (Ping timeout: 245 seconds). 10:41:44 -!- LKoen has quit (Remote host closed the connection). 10:46:14 -!- LKoen has joined. 10:50:54 -!- LKoen has quit (Ping timeout: 246 seconds). 11:21:11 [[Byter]] M https://esolangs.org/w/index.php?diff=66337&oldid=66332 * PaniniTheDeveloper * (+0) 11:46:03 -!- Phantom_Hoover has quit (Ping timeout: 265 seconds). 12:38:37 int-e : also ⌜𝌅⌝ (and apparently also ⌜𝌀⌝ ?) 12:38:54 `unidecode 𝌅𝌀 12:38:55 ​[U+1D305 DIGRAM FOR EARTH] [U+1D300 MONOGRAM FOR EARTH] 12:39:39 "usually associated with human (Chinese ren), rather than earth" (the latter one) 12:41:56 int-e : i was pondering the other day, using these (mono|di|tri|tetra|hexa)-grams as pixels (sortof), wondering why the internal ordering was inconsistent among them, and trying to make some sense of the associations of different meanings to the bits/trits 12:42:41 (perhaps the I Ching would explain some of it ?) 12:43:35 (in particular the internal ordering of the hexagrams is strange) 12:44:28 * ski glances at fizzie 12:49:40 -!- sebbu has quit (Quit: reboot). 12:50:46 ski: hmm it embeds a fancy complementation operation (mirror image, except when the symbol is vertically symmetric; in that case, the boolean complement is taken instead) 12:50:47 * ski just realized what int-e meant by 12:50:53 ski: Now does that mean that shift-o (for example) can produce any of A, B, H or I? 13:15:39 [[Pie]] N https://esolangs.org/w/index.php?oldid=66338 * InfiniteDonuts * (+327) Created page with "==Introduction== Pie is an 2-dimensional esoteric programming language created by [[User:InfiniteDonuts]] that is sort of a [[Befunge]]/[[PATH]] hybrid. Like Befunge, i..." 13:16:48 [[User:InfiniteDonuts]] https://esolangs.org/w/index.php?diff=66339&oldid=66269 * InfiniteDonuts * (+68) 13:21:28 [[Pie]] https://esolangs.org/w/index.php?diff=66340&oldid=66338 * InfiniteDonuts * (+176) 13:23:50 the github link results in a 404 13:26:02 InfiniteDonuts doesn’t have any public repositories yet. 13:26:10 this may be why 13:28:10 well 13:33:35 [[Pie]] https://esolangs.org/w/index.php?diff=66341&oldid=66340 * InfiniteDonuts * (+5) 13:41:39 ski: https://en.wikipedia.org/wiki/King_Wen_sequence somewhat explains the duality (but there's also a lot of mystification surrounding all this) 13:47:27 I wonder about compiling Scheme to Javascript using CPS. JS has poor TCO support so you'd want to make a trampoline. But would you construct a custom trampoline for each program? It sounds like a neat approach regardless of the practical benefits it may or may not have. 13:51:16 I remember borrowing "Compiling with Continuations" from the library years & years ago. I remember liking it but I never got too deep and I don't remember any of the content specifically. 13:53:28 -!- Frater_EST has joined. 13:54:24 I think there's usually an assumption there that you'll be compiling to a machine language that has conventional jumps, though. Compiling to another high-level language with functions (and closures, even), but not TCO, would be putting a different twist on it. 13:55:06 there's always for (;;) { switch (foo) { ... 
} } 13:55:20 Yeah, that's what I meant by "trampoline". 13:55:44 or stuff like while (next != NULL) { next = (*next)(); } 13:57:50 Great. I now want to write a Scheme compiler that compiles to trampolined code, as if I don't already have enough on my todo list. 13:58:08 -!- Frater_EST has quit (Ping timeout: 246 seconds). 13:58:15 Thanks, personal interests! You're really helpful sometimes. 13:58:31 -!- Frater_EST has joined. 14:04:50 -!- Frater_EST has quit (Ping timeout: 240 seconds). 14:06:36 -!- Frater_EST has joined. 14:06:44 [[Language list]] https://esolangs.org/w/index.php?diff=66342&oldid=66275 * InfiniteDonuts * (+10) /* P */ 14:11:01 -!- Sgeo_ has joined. 14:13:57 -!- Sgeo has quit (Ping timeout: 268 seconds). 14:24:55 -!- Frater_EST has left. 14:31:46 -!- arseniiv has joined. 14:33:03 hi hi 14:34:28 hi arseniiv 14:37:03 I thought up a monoid thinking about types for stack operations. Though I already knew how to encode them with heterogeneous lists, I think a monoid looks more operationally usable. Or at least it looks neat in case we’re concerned only with stack under/overflow and not types of its elements; 14:38:45 [[L]] https://esolangs.org/w/index.php?diff=66343&oldid=53989 * InfiniteDonuts * (+100) /* Hello World */ 14:39:57 [[L]] https://esolangs.org/w/index.php?diff=66344&oldid=66343 * InfiniteDonuts * (+77) 14:41:09 in generic case, it’s a free monoid on t⁺, t⁻ for all types t, factored by t⁺t⁻ = e; in the simplest it’s a free monoid on +, − factored by +− = e. Now the last one looks almost like integers but with a quirk, so I thought maybe we need to name these things numbers too. Here, all balanced strings of + and − equal to e and we end up with a unique presentation of each element as −^m +^n 14:41:36 in stack op terms it means m elements are popped and then n are pushed 14:41:53 we can’t pop from empty stack so −+ is not e 14:42:27 likewise with the general version, and we even may generalize to deques 14:43:55 as two ends of a deque are basically independent from this perspective if we don’t know its length 14:43:56 [[Pie]] https://esolangs.org/w/index.php?diff=66345&oldid=66341 * InfiniteDonuts * (+87) 14:44:53 -!- imode has joined. 14:45:05 now, we could do something like this with checking well-placedness of [ and ] but ? and ; would get in the way 14:45:11 himode 14:47:13 . o O ( now I propose calling those +−-things plumbers as the stack may be viewed as a tube ) 14:48:36 -!- Frater_EST has joined. 14:49:07 allo arseniiv. 14:51:09 whoa, if you link with -fno-plt, ld will turn statically linked calls from indirect jumps to direct jumps. 14:55:09 -!- Frater_EST has quit (Remote host closed the connection). 14:55:42 [[Special:Log/newusers]] create * Lewis * New user account 14:58:06 int-e, ty 14:58:36 arseniiv : hm, reminds me of some kind of bra-ket description of something with regexen ? 14:58:52 (BTW how old is trampoline idea) 14:59:18 perhaps you could find some mention in some Scheme paper 14:59:31 ski: it reminded me about something QM too, creation and annihilation operators should work that way too IIRC 15:01:53 arseniiv : might have some relevant paper about it 15:02:22 er, sorry 15:02:31 cpressey : that ^ was for you 15:05:01 are you still working with that C compiler? 15:06:21 ski: thanks! 15:07:11 arseniiv: the thing I don't like about my current language is [ and ]. it kind of fails the model of computation check because [ and ] have to be balanced for the underlying logic to work. 
15:08:33 which means that a PDA or counter machine is required to validate the language. 15:09:28 * ski supposes it was possibly both for arseniiv and cpressey .. 15:12:30 I fail to see how pie is notably different from befunge except it being underspecified 15:15:11 arseniiv: http://www.cs.indiana.edu/hyplan/dfried/ts.ps talks a bit about the history. As I understand it, the idea is old but appears in slightly different forms and doesn't get the name "trampoline" until about 1995. 15:17:30 * ski supposes cpressey got that link off :) 15:17:59 Indirectly, yes (The paper was listed there but wasn't archived; I had to search to find a copy of it) 15:18:09 * ski nods 15:18:14 too bad the domain expired 15:19:45 arseniiv: the thing I don't like about my current language is [ and ]. it kind of fails the model of computation check because [ and ] have to be balanced for the underlying logic to work. => right. That they stand in the way of typing the program is almost the same thing here 15:20:32 cpressey: thanks! (hmm I need a postscript viewer…) 15:21:02 arseniiv : perhaps `gv' ? 15:21:45 (someone mentioned `zathura' and `okular' too. and i think `evince' can do it ?) 15:22:02 * ski . o O ( `xpdf' ) 15:23:10 I don't think there's a real solution to that problem. apart from manual jumps. I still think there needs to be a linear notation for state machines. 15:23:52 I'm on Ubuntu; `evince` can view .ps directly, and there is also `ps2pdf` 15:25:29 (apparently `ps2pdf` is part of `ghostscript`) 15:27:44 arseniiv, imode: fwiw, the last few concatenative languages I've designed have not had nesting or names, sort-of for the same reasons you mention. I've been calling them "purely concatenative" for that reason. They're not very nice to program in though. 15:28:19 cpressey: mine doesn't have nesting. [ and ] form an infinite loop. 15:28:31 it's a control flow construct more than anything. 15:32:02 -!- GeekDude has quit (Ping timeout: 240 seconds). 15:36:21 * ski usually uses `gv' 15:37:38 * ski idly ponders mapping concatenative to categorical 15:38:28 yay I converted that ps 15:38:53 * ski idly ponders how to convert PDF to PS 15:39:02 now I can uninstall ghostscript mwahaha 15:39:38 any structured control flow requires an external validator, so I guess it fits under a model of programming rather than one of computation. unstructured, state-driven or pattern-based control flow are the only candidates for a model of computation. 15:42:13 why are array languages classified under "unstructured", I wonder. 15:42:49 where ? 15:43:08 wikipedia 15:43:27 https://en.wikipedia.org/wiki/Non-structured_programming <-- right-hand sidebar, right under "Non-structured". 15:43:45 I mean I guess APL has a goto statement, doesn't it? 15:43:55 maybe they mean "structured" in the sense of the "structured programming" paradigm ? 15:44:05 more than likely, yeah. 15:44:14 which APL fits under. 15:44:24 hm, looks that way 15:44:30 never liked array languages. they're complicated to implement. 15:44:38 NESL ? 15:44:46 NESL? 15:44:48 yes 15:44:55 whazzat. 15:45:00 15:45:13 "Nested data parallelism" 15:45:50 are you saying this isn't complicated to implement? 15:46:37 or that this fits the definition of an array language. 15:47:16 -!- GeekDude has joined. 15:56:00 hm, perhaps rather that it is complicated to implement 15:56:16 (and maybe fits the definition, not sure ?) 15:57:52 ok well TIL there's a very close correspondence between CPS and A-Normal Form. (I knew of ANF previously but didn't see the connection to CPS at all.) 
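A tiny Python sketch of the trampoline pattern from the Scheme-to-JS tangent above (the generic idea only, not any particular compiler's output): calls in tail position return a thunk instead of calling directly, and a driver loop keeps bouncing until a non-callable value comes back, mirroring the while (next != NULL) { next = (*next)(); } shape quoted earlier:

    def trampoline(bounce):
        # keep calling thunks until something that is not callable comes back
        while callable(bounce):
            bounce = bounce()
        return bounce

    def even(n):
        return True if n == 0 else (lambda: odd(n - 1))    # tail call packaged as a thunk

    def odd(n):
        return False if n == 0 else (lambda: even(n - 1))

    print(trampoline(lambda: even(100000)))                # True, without growing the host stack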
15:58:00 probably. I've seen people touting APL and K, etc. as "languages with the smallest implementations", but when your implementations are obfuscated and optimized for size rather than simplicity, it's a moot damn point. 15:58:22 -!- cpressey has quit (Quit: A la prochaine.). 16:01:28 array languages _are_ interesting, though. while they do have control flow operations, a lot of code I see doesn't even have any of them. 16:01:41 it's all operations on data. 16:03:13 they're hella messy, though.. 16:05:50 * ski . o O ( ) 16:10:38 unstructured control flow is hard to compose without sophisticated tooling, though. for example, creating a state machine that enters either a 'true' or 'false' state upon some comparison operation is fine, but composing two machines means identifying where the "anchor" points are for that given machine and attaching two subsequent machines accordingly. 16:14:10 stitching together state machines is also complex because you need names for duplicate/intermediary states as well... 16:20:32 if you used a line numbering approach, it could work. you still need some machinery to stitch together arbitrary code snippets, though. 16:23:07 * ski . o O ( ⌜10 PRINT "VOOB ";␤20 GOTO 10⌝ ) 16:25:11 -!- grumble has quit (Quit: And this chaos, it defies imagination). 16:29:48 -!- grumble has joined. 16:32:01 -!- sebbu has joined. 16:43:43 -!- Sgeo has joined. 16:43:44 -!- Lord_of_Life_ has joined. 16:44:37 -!- Sgeo_ has quit (Ping timeout: 240 seconds). 16:46:37 -!- Lord_of_Life has quit (Ping timeout: 240 seconds). 16:46:38 -!- Lord_of_Life_ has changed nick to Lord_of_Life. 17:10:52 arseniiv: you could probably get away with not validating [ and ] if you mean [ to be "push a symbol to the tape representing the current instruction pointer" and ] to be "jump to the instruction pointer represented on the tape". 17:12:22 though "conditional/unconditional break" would be difficult at that point, because you have to jump to an unspecified address.. 17:15:03 you know what I didn't think of, though. relative addressing. 17:23:17 -!- FreeFull has joined. 17:25:36 imode: `[ ] ? ;` are not so hard to validate as to represent as something typeful 17:26:29 they aren't hard to validate, no. but because they require validation, they don't meet my requirement anymore. 17:27:13 oh, also it occurred to me as I drifted off that your new implementation of `pick` now can take any nonnegative integer index. Even if it’s too big, we’ll just wrap around several times and that’s it 17:27:35 exactly. :) 17:28:12 but because they require validation, they don't meet my requirement anymore. => yeah. I meant from my typing perspective 17:28:18 ahh. 17:30:15 of course that’s why compilers don’t use typing as the only kind of static information, but still I’ll be glad to unify them for cases as simple as this 17:32:15 anyway it looks too philosophical. There is a constructive approach: program a validator which takes word at a time and then try to encode that as types; this is how I ended up with that +− monoid 17:32:34 this whole thing is largely philosophical lol 17:32:35 previously I thought of those types as m → n where m, n : N 17:33:27 but in this form they have no easy law of composing (f: m → n, g: m′ → n′, f ∘ g: ???); and the monoid representation has, it’s just a concatenation 17:34:08 "vertical composition" ? 17:34:17 (or is it "horizontal" ?) 17:34:22 * ski can never recall .. 
17:34:49 −+ ∘ −−+ = −(+−)−+ = −−+ 17:35:31 ski: don’t remember either 17:36:12 though maybe the notation is too cryptic, here f: m → n means f pops m values and then pushes n values 17:36:19 (or dequeues and enqueues) 17:36:41 oh 17:37:36 * ski idly wonders whether there's a name for monoids with left- and right- inverses like that 17:38:46 FTR I was going to ask about that too 17:38:59 maybe someone knows 17:39:45 they seem to be in a sense halfway to a free group 17:44:16 yea, first i was going to suggest free group, to you :) 17:44:39 (until i saw "we can’t pop from empty stack so −+ is not e") 17:44:57 * ski idly recalls talking to someone doing random walks in groups 18:04:06 also we can define “multiplication” as can be done on (N, +) and (Z, +), by considering all [anti]endomorphisms. Here, they seem to form a group ≅ Z, so this “multiplication” is not that good; anyway “multiplying” by n would mean replacing each ± by ±^n if n ≥ 0, or by ∓^n if n ≤ 0. This makes not a lot of sense for applications of that monoid here, though 18:08:14 would that be a group action ? 18:08:18 now that I'm thinking of it, I don't think there's any model of computation that _doesn't_ require some validation of the code prior to execution other than perhaps cellular automata. 18:12:15 depends how you set things up 18:12:33 CA source code is binary strings and CA valid source code is binary strings 18:12:45 lisp source code is ascii text and valid lisp source code is well balanced brackets 18:13:08 but you could say, CA source code is ascii text and the only valid CA inputs are ones only using the 0 and 1 symbols 18:13:48 not really the case, here. state machines need to be stored in some format in some place, and that format needs to be some form of a state table. 18:14:15 brainfuck and my language are stored as linear text sequences, but only valid sequences contain matched brackets. 18:15:03 CAs are literally just "here's some data, with some regions linked together in some topological fashion, and data exists in those regions, and depending on what data is next to what, data in certain regions changes". 18:15:56 I disagree completely 18:16:11 feel free to, I guess. 18:17:35 ski: if I’m correct it would be a monoid action as (Z, ⋅) is what acts in this case, not (Z, +): m×(n×a) = (mn)a 18:19:04 (and not (m+n)a) 18:24:29 -!- MDude has quit (Ping timeout: 246 seconds). 18:25:14 mhm 18:49:49 `asm label: nop; nop; addr32; jmp label 18:49:49 0: 90 nop \ 1: 90 nop \ 2: 67 eb fb addr32 jmp 0
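A small Python sketch of the +/− stack-effect monoid described above (the encoding is my own and only tracks counts, not element types). An element in normal form −^m +^n is stored as the pair (m, n), meaning "pop m, then push n", and composition is concatenation followed by cancelling +− pairs at the junction:

    def compose(f, g):
        # f = (m, n): pop m then push n; run f first, then g.
        # Concatenating  -^m +^n  with  -^m' +^n'  cancels c = min(n, m')
        # many "+-" pairs at the junction, leaving -^(m + m' - c) +^(n + n' - c).
        (m, n), (m2, n2) = f, g
        c = min(n, m2)
        return (m + m2 - c, n + n2 - c)

    # the "−+ ∘ −−+ = −−+" example from the log: (1,1) then (2,1) composes to (2,1)
    assert compose((1, 1), (2, 1)) == (2, 1)

    # dup (pop 1, push 2) followed by drop (pop 1, push 0) has net effect (1, 1),
    # i.e. it needs at least one element and leaves the count unchanged
    assert compose((1, 2), (1, 0)) == (1, 1)

Note that the cancellation is one-sided, matching the remark that −+ is not the identity: you cannot pop from an empty stack, so a pop followed by a push does not reduce to e.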