00:12:31 -!- zseri has quit (Quit: zseri). 00:16:58 -!- tromp has quit (Remote host closed the connection). 00:23:33 [[There Once was an Esolang Named Fred]] https://esolangs.org/w/index.php?diff=73076&oldid=72979 * Baidicoot * (-250) /* Compilation */ 00:25:25 [[There Once was an Esolang Named Fred]] https://esolangs.org/w/index.php?diff=73077&oldid=73076 * Baidicoot * (+128) /* Compilation */ 00:31:20 -!- Arcorann has joined. 00:31:49 -!- Frater_EST has joined. 00:46:52 [[Talk:EsoScript]] https://esolangs.org/w/index.php?diff=73078&oldid=59663 * DmilkaSTD * (+89) 00:52:05 The other thing to do in TeXnicard is pixel rounding when typesetting, perhaps similar to how DVItype does, although TeXnicard's internal data structures contain more information than a DVI file does, so that can be used instead of having to guess in some cases. 00:53:57 -!- Frater_EST has left. 00:53:57 -!- tromp has joined. 00:58:48 -!- tromp has quit (Ping timeout: 256 seconds). 01:17:24 -!- ArthurSt1ong has quit (Quit: leaving). 01:17:44 -!- ArthurStrong has joined. 01:27:24 -!- MDude has quit (Quit: Going offline, see ya! (www.adiirc.com)). 01:48:33 -!- tromp has joined. 01:53:31 -!- tromp has quit (Ping timeout: 272 seconds). 02:12:03 -!- Phantom_Hoover has joined. 02:21:44 -!- Phantom_Hoover has quit (Ping timeout: 272 seconds). 02:42:45 -!- tromp has joined. 02:47:00 -!- tromp has quit (Ping timeout: 246 seconds). 02:53:12 -!- Frater_EST has joined. 03:10:46 -!- adu has joined. 03:19:23 -!- zzo38 has quit (Disconnected by services). 03:19:28 -!- zzo38 has joined. 03:36:52 -!- tromp has joined. 03:41:49 -!- tromp has quit (Ping timeout: 272 seconds). 04:25:00 -!- Frater_EST has quit (Read error: Connection reset by peer). 04:31:00 -!- tromp has joined. 04:35:08 -!- tromp has quit (Ping timeout: 246 seconds). 04:38:55 -!- Sgeo has quit (Read error: Connection reset by peer). 04:55:38 -!- adu has quit (Ping timeout: 272 seconds). 04:59:02 -!- adu has joined. 05:12:39 -!- Sgeo has joined. 05:25:07 -!- tromp has joined. 05:29:56 -!- tromp has quit (Ping timeout: 265 seconds). 05:30:57 [[Livefish]] https://esolangs.org/w/index.php?diff=73079&oldid=60584 * Voltage2007 * (+29) 05:31:07 -!- adu has quit (Ping timeout: 272 seconds). 05:41:30 -!- LKoen has joined. 05:51:51 -!- adu has joined. 05:54:15 -!- ArthurStrong has quit (Ping timeout: 258 seconds). 06:15:36 -!- rain1 has joined. 06:19:03 -!- tromp has joined. 06:23:17 -!- tromp has quit (Ping timeout: 246 seconds). 06:48:05 -!- iovoid has quit (Quit: iovoid has quit!). 06:49:08 -!- iovoid has joined. 07:04:27 -!- tromp has joined. 07:11:06 [[Deadfish~]] https://esolangs.org/w/index.php?diff=73080&oldid=71168 * Voltage2007 * (+23) 07:18:20 -!- adu has quit (Ping timeout: 246 seconds). 07:20:40 -!- Sgeo has quit (Read error: Connection reset by peer). 07:20:43 -!- adu has joined. 07:25:48 [[Symbols]] https://esolangs.org/w/index.php?diff=73081&oldid=72982 * Voltage2007 * (-208) 07:26:38 -!- adu has quit (Ping timeout: 260 seconds). 07:29:52 -!- imode has quit (Ping timeout: 256 seconds). 07:35:44 -!- adu has joined. 07:43:54 -!- sprocklem has quit (Ping timeout: 260 seconds). 07:44:09 -!- sprocklem has joined. 08:40:49 -!- TheLie has joined. 08:43:32 -!- Sgeo has joined. 08:54:01 -!- TheLie has quit (Remote host closed the connection). 09:18:36 -!- Sgeo has quit (Read error: Connection reset by peer). 09:50:38 -!- Arcorann has quit (Read error: Connection reset by peer). 10:30:21 -!- Arcorann has joined. 10:39:23 -!- arseniiv_ has joined. 
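A minimal Haskell sketch of the kind of pixel rounding described above (this is not TeXnicard's or DVItype's actual code, and the unit names are made up): round the accumulated position to a pixel at each step, instead of summing already-rounded widths, so the rounding error cannot drift along a line.

type Sp = Integer   -- hypothetical device-independent units
type Px = Int       -- device pixels

-- conv is pixels per unit; positions come from the running sum of widths
toPixels :: Double -> [Sp] -> [Px]
toPixels conv widths = map roundPos (tail (scanl (+) 0 widths))
  where
    roundPos pos = round (conv * fromIntegral pos)

-- the naive version accumulates rounded widths and lets the error drift
toPixelsNaive :: Double -> [Sp] -> [Px]
toPixelsNaive conv widths =
  tail (scanl (+) 0 (map (\w -> round (conv * fromIntegral w)) widths))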
10:53:51 -!- Lord_of_Life_ has joined. 10:57:14 -!- Lord_of_Life has quit (Ping timeout: 264 seconds). 10:57:21 -!- Lord_of_Life_ has changed nick to Lord_of_Life. 11:14:06 @tell ais523 I have a concrete instance where adding the "at least one" constraints makes my solver find UNSAT instantly, but with only "at most one" it takes 18 seconds. 11:14:06 Consider it noted. 11:39:46 shachaf: is it a pigeonhole trick where you have n of those disjoint at most one groups on the left side, a bunch of transformations, and then an at least constraint on the right side that actually forces the at least one constraint in a way that your solver can't quickly notice? 11:40:51 you know, like put 8 black queens on a chessboard such that no pair of queens defend each other 11:41:44 What do you mean? 11:42:36 shachaf: the at most one constraint is that each row has at most one queen. that much is easy from the description, because two queens on a row would attack each other. the at least one constraint would be that you put at least one queen in every row, which is not so obvious. 11:43:17 if you know the at least one constraint, then you can find solutions more quickly, because the solver won't try to explore solutions where all squares of a row are excluded. 11:43:26 s/solutions/partial solutions/ 11:45:04 b_jonas: Oh, the context here is sudoku. 11:45:19 Where I have constraints like "at least one of each digit on this row" and "at most one". 11:48:55 shachaf: yes, that's similar 11:49:21 it's the same pigeonhole sort of constraint, when you want exactly one of each digit in a row, but you can prove that 11:50:02 though of course these problems are special enough that you probably don't want to solve them by general SAT solvers, but rather by specialized solutions or at least finite domain solvers 11:50:18 for sudoku you can probably find a lot of different solvers on the net 11:50:21 It's not clear. 11:50:27 What's the hardest 9x9 sudoku? 11:50:35 I imagine it's still pretty easy for a CDCL solver. 11:51:03 I don't know, but I think that's a well known question and there are candidates on the internet, though it might be hard to prove which one is "the hardest" 11:55:25 I suspect they're kind of sensitive to specific choices you make so it's hard to call one the hardest. 11:55:53 But SAT solvers do lots of restarts anyway so they'll probably find a nice variable ordering and solvulate it all easy-peasy-like. 12:11:03 https://www.telegraph.co.uk/news/science/science-news/9359579/Worlds-hardest-sudoku-can-you-crack-it.html <-- from searching "hardest sudoku" 12:18:34 shachaf: yeah, I'm probably just biased against SAT solvers, I don't trust them much 12:19:35 though perhaps I should be biased against them less than how much I'm biased against machine learning, GPU programming, multi-threading, and just-in-time compilation 12:20:03 -!- ais523 has joined. 12:20:16 that's a weird set of biases 12:20:38 I'm biased against machine learning, in favour of GPU programming and multithreading, and mostly neutral on JIT compilation 12:20:50 although I think people don't AOT compile nearly as much as they should 12:21:00 When is JIT compilation good? 12:21:10 b_jonas: oh, these are almost all what I consider magic at least partially. Though GPU programming is just a thing like microcontrollers which I simply don’t know now and don’t intend to dive into, and not magic 12:21:14 -!- arseniiv_ has changed nick to arseniiv. 12:21:22 shachaf: I can see an argument for JIT compiling specialised code 12:21:31 e.g.
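A small Haskell sketch of the two constraint shapes being compared above, as plain CNF clause generation (a generic pigeonhole-style encoding, not anyone's actual solver input; Lit and Clause are made-up names, with a positive Int meaning "this cell holds this digit"):

import Data.List (tails)

type Lit    = Int
type Clause = [Lit]

-- "at least one of these is true": a single clause
atLeastOne :: [Lit] -> [Clause]
atLeastOne vs = [vs]

-- "at most one of these is true": one exclusion clause per pair
atMostOne :: [Lit] -> [Clause]
atMostOne vs = [ [-a, -b] | (a:rest) <- tails vs, b <- rest ]

-- "exactly one" is both together; as discussed above, leaving out the
-- (often logically redundant) atLeastOne clauses can make an UNSAT
-- instance much slower to refute
exactlyOne :: [Lit] -> [Clause]
exactlyOne vs = atLeastOne vs ++ atMostOne vs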
you see a function is being called with argument 2 a lot 12:21:31 ais523: no it's not. all four are technologies that are (or were) overhyped, used for marketing and/or actual solutions for problems where they hurt more than they help. I should add blockchains to that list, but that's too obvious. 12:21:37 so you recompile it inlining 2 as a parameter 12:21:40 and multithreading should be a very different kind of magic if at all 12:21:47 ais523: SAT solvers don't match that set, which is why I should be less biased against them 12:21:56 I feel like understanding your code is so valuable, though. 12:22:09 shachaf: even AOT compilation doesn't help much with that 12:22:29 But I can see some uses for JIT, I suppose. But not just general-purpose code in your regular programming language. 12:22:30 especially as most people don't understand what's going on with machine code 12:23:11 Are things like regular expression JITs worthwhile? 12:23:16 there's a side advantage of JIT which isn't technically confined to JITs but is normally only seen there: you can distribute platform-independent binaries 12:23:25 as a result, they're also all technologies that I'd be reluctant to use, and where I sell myself to employers as being an expert in more traditional programming tech as opposed to that overhyped stuff that everyone tries to learn these days, giving a ... uh, insert buzzwords here, the point is that I can write straightforward code to translate domain-specific knowledge such that for many problems I 12:23:31 get a better solution than the hyped tech that your other candidates would try to sell you 12:23:36 regular expression JITting is definitely useful in the case where the regular expression is only known at runtime 12:23:41 For runtime user-entered regular expressions, I mean. If they're known at compiletime then there's no reason to JIT, of course, you just want a specialized AOT compiler. 12:23:42 regexes have to be compiled to get good performance 12:24:06 as for known-at-compile-time, it probably depends on how large the resulting state machine is 12:24:33 b_jonas: I wish GPU programming was more like CPU programming, where you could target a GPU rather than bizarro languages and compilers provided by vendors. 12:24:42 if the source is much smaller than the binary, and the compiler is already on the target machine 12:24:43 it may make sense to just ship the code 12:24:49 but I'm familiar enough with multithreading that I can write code to use it sanely and reliably when it's really needed, and I at least try to recognize when each of these technologies could be useful in which case I can advise them to ask someone who is good at those techniques to use them 12:24:52 shachaf: oh yes, GPU toolchains are all-around horrible 12:25:05 Vulkan is an improvement on what came before but it may not be enough of an improvement 12:25:36 b_jonas: one thing I dislike about multithreading is languages that require multithreaded code to do things that shouldn't need it 12:25:56 like, oop languages? 12:25:57 Isn't Vulkan just a huge complicated mess that is also not relevant to almost any platforms? 12:25:58 e.g. because they have no way to block on two unrelated things simultaneously 12:26:16 SAT solvers don't match that set => hmm, yet?..
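A toy Haskell illustration of the "recompile it inlining 2 as a parameter" idea above (made-up functions, nothing to do with a real JIT, which would do this to machine code at run time): if power keeps being called with n = 2, the specialised code is effectively square, with the recursion and the argument test gone. Runtime-compiled regexes are the same trade: the pattern is only known at run time, so it is compiled once and then reused.

power :: Int -> Double -> Double
power 0 _ = 1
power n x = x * power (n - 1) x

-- what a specialiser could turn "power 2" into
square :: Double -> Double
square x = x * x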
12:26:19 shachaf: it has fewer levels of abstraction than, say, DirectX or OpenGL 12:26:37 which automatically means that working around its messes is more viable than working around its competitors' messes 12:26:41 I heard they use a SAT solver to (type)check F* code 12:27:30 well, Damas-Milner typechecking is basically a sort of SAT solver, but the algorithm is very simple and (with a Hindley-Milner type system) known to terminate 12:28:24 btw, my view on multithreading is quite different from most people's; I believe that a program that can't be trivially made multithreaded is inherently inefficient, and thus only suitable for tasks where an inherently inefficient program is acceptable 12:28:52 e.g. ls has no reason to be CPU-efficient because it spends almost all its time blocking on I/O anyway 12:28:55 arseniiv: I don't think they ever will. just look at TAOCP. anything that TAOCP covers won't go into that set. SAT solving is in TAOCP, JIT compilation is minimally mentioned in vol 1 about interpreters, multithreading is specifically excluded from the series' topic in the introduction (though I think a few exercises mention locking), GPU programming is I think excluded though the intro doesn't 12:29:01 specifically mention that because GPUs didn't exist when vol 1 third edition was printed, 12:29:13 and ... what were the others? blockchains and machine learning aren't covered either. 12:30:34 ais523: oh! that reminds me one question. How much GHC’s type system adds to HM to typecheck Haskell with all those extensions (maybe without UndecidableInstances and friends) 12:30:42 do you know? 12:31:23 ais523: I'm not sure what it means to say that a program is inherently inefficient there. 12:31:23 though of course we have the ed. Iványi book, which specifically tries to cover topics that TAOCP doesn't cover, because TAOCP is so good that there's no point for them writing a book that says the same thing again 12:31:35 -!- tromp has quit (Remote host closed the connection). 12:32:12 so ed. Iványi has chapters for multithreaded programming, SQL databases (that one I'm neutral about... it's more complicated and refined than that but let's not get into that now), etc 12:32:13 Some tasks are hard to parallelize, and certainly hard to parallelize trivially. If there's no better option for doing that task, in what sense is it inefficient? 12:34:53 arseniiv: I don't know about most extensions, but Rank2Types is type-checkable in theory but GHC can't do it, RankNTypes has undecidable typechecking 12:35:00 this is the reason that Rank2Types is separate 12:35:28 shachaf: I mean parallelizing to the extent that the underlying problem can be parallelized 12:35:34 ais523: neat detail, thanks! 12:35:54 Do you mean inference rather than checking? 12:36:07 for example, even if the task is entirely sequential (which is very rare), a program to solve it should at least be trivially modifiable to perform multiple unrelated instances of that task in parallel 12:36:12 shachaf: yes 12:36:24 or, well, checking of a program where the types might not all have been explicitly stated by the programmer, which is the same thing 12:36:30 shachaf: inference too, yeah, what good is checking without inference 12:36:47 arseniiv: wow, I didn't know that about rank 2 types 12:36:54 I think the claim that Rank2Types is inferrable in theory isn't very useful. 
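A small example of the higher-rank types under discussion (standard RankNTypes usage, hypothetical names): the argument has to stay polymorphic inside the function, which plain Hindley-Milner inference will not guess, so the rank-2 signature must be written explicitly.

{-# LANGUAGE RankNTypes #-}

applyBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
applyBoth f (x, y) = (f x, f y)

-- without the signature, inference gives (a -> b) -> (a, a) -> (b, b),
-- and this call would then no longer typecheck
example :: (Int, Bool)
example = applyBoth id (3, True)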
12:37:05 I didn’t too 12:37:20 shachaf: it was relevant in my PhD thesis, that's why I know it 12:37:24 (was that a wrong one-character completion) 12:37:48 oh yes, sorry 12:37:54 I mean, I'm not actually sure what the exact claim is, but my understanding is that you can do "whole program inference": If there are rank-2 types that can be assigned to the whole program, you can find them, but you can't do it locally. 12:38:08 ais523: Oh, then you must know much better than I do. 12:38:17 Is what I'm saying right? 12:38:46 Also, does the same argument about rank-2 types work for existentials? 12:38:55 I think it must since you can encode existentials with rank-2 types. 12:39:14 there's more than one definition of existentials, and I forget which one Haskell uses 12:39:25 but if it can be encoded using rank-2 types, it must be inferrable 12:40:19 Say you have an existentially typed thing like xs = [(1, intToString), (False, boolToString)] 12:40:25 I glance at GHC’s inference algorithm because I know if I ever make a usable language, I’ll need all the types, and as subtyping is almost certainly no-go, something à la GHC or Rust would need to be used 12:40:41 You can assign that the existential types [exists a. (a, a -> String)], or [exists a b. (a, b)], or [exists a. a], etc. 12:40:46 but I’m afraid to look into the concrete implementation (in GHC) 12:40:55 And there's no way to look at that list without seeing how it's being used and pick one of those. 12:42:34 my main interest in existentials was related to type tags rather than to base types or function types, which is a bit different from what GHC is doing 12:43:07 -!- tromp has joined. 12:43:18 by "type tags" I mean things like the ! from linear logic 12:43:44 And there's no way to look at that list without seeing how it's being used and pick one of those. => hm would that also be true for any advanced typing constructs? We use types to constrain operations which can be performed, after all, and when very abstract typing constructs emerge, they aren’t usually coupled to the data representation, so we shouldn’t be able to decide(?..) 12:43:46 But you can encode this rank-2. Ignoring the list, I guess: foo f = f (1, intToString) 12:43:47 the field I was working in was related to making those things more precise, my thesis was about why the existing attempts at doing it didn't work 12:44:40 You can give foo the type forall r. ((Int, Int -> String) -> r) -> r, or forall r. (forall a. (a, a -> String) -> r) -> r, or forall r. (forall a. a -> r) -> r, etc. 12:44:40 right, that foo version of things looks familiar to me 12:45:10 There's no clear most general type is I guess all I was trying to say. 12:45:29 ah right 12:46:01 I think my experience is that "most general type" equivalents might exist, but aren't actually types, they're some sort of complicated set of constraints 12:46:32 there are some examples in my thesis where, due to constraint loops that are solvable, the normal subtyping relationships break down 12:46:32 surely there could be something to reason about possible types in these cases? Which would complicate algorithms and maybe a type system itself, but maybe wouldn’t look too inhumane? 12:47:01 I think my experience is that "most general type" equivalents might exist, but aren't actually types, they're some sort of complicated set of constraints => something like that! 12:47:46 it’s nontrivial that typechecking can be that nontrivial 12:47:54 s/be/become 12:49:51 Fact 6.2.13.
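A sketch of the two encodings being contrasted above, with show standing in for the chat's intToString/boolToString: the existential datatype hides exactly as much as its declaration says, and the rank-2/CPS version needs its signature for the same reason, since several of the types listed above would fit and only the annotation picks one.

{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

data Showy = forall a. Showy a (a -> String)   -- roughly "exists a. (a, a -> String)"

xs :: [Showy]
xs = [Showy (1 :: Int) show, Showy False show]

render :: Showy -> String
render (Showy v f) = f v

-- the rank-2 encoding of the same pair
foo :: forall r. (forall a. (a, a -> String) -> r) -> r
foo k = k (1 :: Int, show)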
There are terms which type in SCC and Bounded ICA, and which have all types for that term differ only in the bounds, but for which there is a covariant position in those types such that the set of bounds that can appear in that position in some type for that term is not the same as the set of all nonnegative integers. 12:50:18 this is basically an argument against the existence of most-general-types 12:50:33 because you don't expect to see extra restrictions in covariant positions 12:50:48 (the fact in my thesis is a counterexample to a conjecture by someone else) 12:51:18 I gave an explicit such term, λ q.(λ g.g(λ x.g(qx)))(λ b.(λ k.((k(λ u.u))(λ l.((kb)(λ m.(l(m( skip ))))))))(λ v.λ w.wv)) 12:51:30 which is kind-of surprising that it's the simplest known term with that property 12:51:36 ("skip" is an arbitrary constant) 12:51:46 (which has base type) 12:53:08 :t \q -> (\g -> g(\x -> g (q x)))(\b -> (\k ->(\u -> u))(\l -> ((k b)(\m -> (l(m(0))))))))(\v -> \w -> w v)) 12:53:10 error: parse error on input ‘)’ 12:53:39 :t \q -> (\g -> g(\x -> g (q x)))(\b -> (\k -> ((k(\u -> u))(\l -> ((k b)(\m -> (l(m(0))))))))(\v -> \w -> w v)) 12:53:40 Num t => (t -> t -> t) -> t 12:53:48 @pl \q -> (\g -> g(\x -> g (q x)))(\b -> (\k -> ((k(\u -> u))(\l -> ((k b)(\m -> (l(m(0))))))))(\v -> \w -> w v)) 12:53:48 flip (ap id . flip (.)) (flip (ap ($ id) . flip flip (. ($ 0)) . ((.) .) . flip id) (flip id)) 12:54:01 was just curious 12:54:23 GHC doesn't see anything interesting about the term, but i guess that's because it isn't doing anything linear-logic-like 12:54:58 -!- Lord_of_Life has quit (Read error: Connection reset by peer). 12:55:54 -!- Lord_of_Life has joined. 12:56:42 :t flip flip 12:56:43 b -> (a -> b -> c) -> a -> c 12:59:26 -!- rain1 has quit (Quit: leaving). 12:59:44 hmm, what's the minimum set of constants required to implement @pl? 12:59:54 and is it TC? 13:00:39 :t (.) 13:00:40 (b -> c) -> (a -> b) -> a -> c 13:00:42 :t ap 13:00:44 Monad m => m (a -> b) -> m a -> m b 13:00:58 :t ($) 13:00:59 (a -> b) -> a -> b 13:01:06 `? duloc 13:01:08 duloc? ¯\(°​_o)/¯ 13:01:11 ah right, ap is the monad version of ($) 13:02:42 s/k/y, I suppose? 13:03:07 Plus things like constructors and eliminators to handle data types. 13:03:55 @pl (\x -> x x)(\x -> x x) 13:03:58 ap id id (ap id id) 13:03:58 optimization suspended, use @pl-resume to continue. 13:04:22 huh, I wonder if its optimizer went into an infinite loop 13:04:27 Yes. 13:04:45 I don't think the double mockingbird even types in Haskell, does it? 13:04:48 :t (\x -> x x) 13:04:49 error: 13:04:49 • Occurs check: cannot construct the infinite type: t ~ t -> t1 13:04:49 • In the first argument of ‘x’, namely ‘x’ 13:04:52 yep 13:04:53 pl isn't typed 13:05:08 :t ap id id 13:05:10 error: 13:05:10 • Occurs check: cannot construct the infinite type: a ~ a -> b 13:05:10 Expected type: (a -> b) -> a 13:05:40 perhaps s and k alone are enough in that case 13:05:49 you can write y and/or z in terms of them 13:06:32 For @pl's purposes, sure, but if you give it type-checking Haskell it will produce a type-checked pl version. 13:07:01 @pl foo x = if x == 0 then 0 else f (x - 1) 13:07:01 foo = ap (flip if' 0 . (0 ==)) (f . subtract 1) 13:07:12 Uh. 13:07:16 @pl foo x = if x == 0 then 0 else foo (x - 1) 13:07:16 foo = fix (ap (flip if' 0 . (0 ==)) . (. 
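A quick sketch of the combinator-basis question raised above: s and k are typeable directly and give i = s k k, but the self-application that a Y combinator is built from fails the occurs check, which is why typed Haskell reaches for fix (or another looping primitive) instead.

s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

k :: a -> b -> a
k x _ = x

i :: a -> a
i = s k k

-- y f = (\x -> f (x x)) (\x -> f (x x))   -- does not type: occurs check, t ~ t -> t1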
subtract 1)) 13:07:40 ah right, so now we have a fixed-point combinator 13:08:28 and y is enough to implement that sort of fixed-point combinator in a lazy language 13:08:58 > fix (\x -> 2 - x) 13:09:01 *Exception: <<loop>> 13:09:16 Haskell's fixed-point combinator kind-of sucks, though 13:09:25 Why? 13:09:38 \x -> 2 - x has a fixed point, but Haskell's fix couldn't find it 13:09:57 It did find it. 13:10:12 `! brachylog ~-₂?w 13:10:14 \ false. 13:10:26 oh right, ~- is + 13:10:27 I guess you mean you want a specific fixed point, but it always finds the least fixed point. 13:10:57 Do you know the answer to this question: Why do type theory types require recursion to be strictly positive, rather than just positive? 13:11:02 what determines whether a fixed point is higher or lower than bottom? 13:11:20 I'm not sure what you mean by positive recursion 13:11:35 I mean types that are recursive and refer to themselves in positive position. 13:11:52 There's a partial order of definedness, with bottom at the bottom (hence the name). 13:12:22 `! brachylog 2-↙..w 13:12:23 1 \ true. 13:12:26 there we go 13:12:53 does haskell even have to have a finite basis of combinators for @pl, given that you can't type some of the untyped combinators? 13:13:07 maybe you need an infinite set of looping combinators 13:13:26 shachaf: oh, I see, so to get a result from fix, you need to write f in such a way that f bottom != bottom 13:13:49 anyway, I'll afk very soon, be back in 2.5 days probably 13:14:26 Right. 13:15:28 and that's only possible if f takes multiple arguments, so that the argument that isn't bottom can be used to determine whether or not to look at the first argument 13:15:40 or if f ignores its first argument entirely, I guess 13:15:50 > fix (const 8) 13:15:51 8 13:16:18 > fix (\x -> x == true) 13:16:20 error: 13:16:20 • Variable not in scope: true :: Bool 13:16:20 • Perhaps you meant data constructor ‘True’ (imported from Data.Bool) 13:16:25 > fix (\x -> x == True) 13:16:28 *Exception: <<loop>> 13:16:32 > fix (\x -> x == False) 13:16:35 *Exception: <<loop>> 13:16:45 Well, it's possible in other cases. 13:16:47 > fix (1:) 13:16:48 [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1... 13:16:49 -!- b_jonas has quit (Remote host closed the connection). 13:17:18 ah right, the bottom is at the tail of the infinite list 13:17:32 > fix fix 13:17:34 error: 13:17:34 • Occurs check: cannot construct the infinite type: a ~ a -> a 13:17:34 Expected type: a -> a 13:17:39 `! brachylog 2-↙..w 1 \ true. => how does it do that? 13:18:19 arseniiv: . is a variable in Brachylog, and concatenation means "assert equal" so 2-↙.. means "assert that 2 minus . equals ." 13:18:38 when I then use w to output the value of . 13:18:43 it uses a constraint solver to figure out what value it has to have 13:19:12 Hmm, I like constraint solvers. 13:19:19 I don't really know how Prolog works. 13:19:35 the ↙ is just used to give - an argument because Brachylog operators are unary by default unless you explicitly make them binary 13:19:43 Is it just defined to use backtracking? 13:19:53 ais523: shachaf: ah, so that’s why it’s called -log!
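A small sketch of the least-fixed-point point being made above (ordinary Haskell; fix is Data.Function.fix): fix f only returns something useful when f produces part of its output before looking at its argument.

import Data.Function (fix)

-- fix is equivalent to tying the knot directly:
fix' :: (a -> a) -> a
fix' f = let x = f x in x

lazyEnough :: [Int]
lazyEnough = fix (1 :)         -- [1,1,1,...]: produces a cons before inspecting its argument

ignoresArg :: Int
ignoresArg = fix (const 8)     -- 8: never looks at its argument at all

tooStrict :: Int
tooStrict = fix (\x -> 2 - x)  -- <<loop>>: the subtraction forces x first, so the least
                               -- fixed point is bottom even though 1 is a fixed point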
13:20:04 shachaf: in original/standard Prolog, yes, but modern Prolog has deviated from that somewhat 13:20:12 there are now two levels of solving ability 13:20:22 the outer one uses backtracking, the inner one uses a constraint solver 13:20:44 although, even in traditional Prolog, there was some limited constraint-solving ability that didn't involve backtracking 13:20:55 I'm not sure if we have a Prolog impl in HackEso to demonstrate, though 13:21:04 Prolog is also pretty related to index notation, I think. 13:21:13 Man, index notation is so good. 13:21:35 I seem to remember something like this discussed here 13:21:36 but, e.g., concat([1,2,3],B,A), B=[4,5,6] will give you the output A=[1,2,3,4,5,6] with no backtracking involved 13:21:47 because it's one of the cases that traditional prolog's constraint-solver handles 13:22:30 Oh, because the naive thing would be to assign values to B and A at the first statement, and then backtrack when you see that your guess for B was wrong? 13:22:39 can one mix Prolog and session types? 13:22:41 err, it's called "append", not "concat" 13:22:48 shachaf: right 13:23:13 if you write append(B,[1,2,3],A),B=[4,5,6]. then the constraint solver can't handle it 13:23:31 so the backtracker is engaged, it finds the solution A=[4,5,6,1,2,3], then goes into an infinite loop looking for a second solution 13:23:47 trying longer and longer guesses for B 13:23:59 Why can't it handle it? 13:24:14 it has a special case for the end of the list but not for the start of a list 13:24:31 Oh, it's just a super-duper-special-case-a-majig. 13:24:41 ?- append([1,2,3],B,A). 13:24:41 Maybe you meant: v @ ? . 13:24:42 A = [1, 2, 3|B]. 13:24:52 there, we can see the special case being written in the output 13:24:55 [[Symbols]] M https://esolangs.org/w/index.php?diff=73082&oldid=73081 * PythonshellDebugwindow * (+35) /* Javascript */ fix 13:24:56 Oh, hmm, I see now, maybe that makes sense. 13:25:00 Because it's a linked list structure. 13:25:12 This is the origin of difference lists, I remember now. 13:25:29 `? cut elimination 13:25:30 right, the special case is actually for an unknown argument to a functor 13:25:30 The cut-elimination theorem states that any Prolog program written using the cut operator ! can be rewritten without using that operator. 13:25:31 such as "cons" 13:25:52 which means that for a list, the constraint solver handles unknown elements and unknown tails, but no other cases 13:26:52 [[Phile]] M https://esolangs.org/w/index.php?diff=73083&oldid=71332 * PythonshellDebugwindow * (+0) /* Syntax */ 13:27:14 [[Phile]] M https://esolangs.org/w/index.php?diff=73084&oldid=73083 * PythonshellDebugwindow * (+1) /* Syntax */ 13:27:26 I think the explicit evaluation order is one of the least elegant things about Prolog 13:27:33 eh, it should have been using some length information in the `append` definition. But then it would need to compute lengths of known arguments and it may end up slow without caching which list had been shown to have which length 13:28:03 arseniiv: the normal implementation of "append" (and the implementation in the standard library) works by looping over all possible lengths 13:29:00 ?- length(B, 3), append(B, [1,2,3], A), B=[4,5,6]. 13:29:00 Maybe you meant: v @ ? . 13:29:02 B = [4, 5, 6], 13:29:03 A = [4, 5, 6, 1, 2, 3].
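A sketch of the difference-list idea that came up above, transplanted into Haskell (the usual textbook formulation, not a particular library): represent a list by "prepend me", i.e. a function whose argument plays the role of the unknown tail B in Prolog's partial list [1,2,3|B], so appending is just composition.

newtype DList a = DList ([a] -> [a])

fromList :: [a] -> DList a
fromList xs = DList (xs ++)

toList :: DList a -> [a]
toList (DList f) = f []        -- finally plug in the empty tail

append :: DList a -> DList a -> DList a
append (DList f) (DList g) = DList (f . g)

-- toList (fromList [1,2,3] `append` fromList [4,5,6])  ==  [1,2,3,4,5,6]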
13:29:18 [[HGFTSNOA]] M https://esolangs.org/w/index.php?diff=73085&oldid=71724 * PythonshellDebugwindow * (+29) /* Function arguments/return */ 13:29:20 when we set the length of B explicitly, there's no backtracking involved because "append" has only one case to check 13:29:37 [[User:PythonshellDebugwindow]] M https://esolangs.org/w/index.php?diff=73086&oldid=72868 * PythonshellDebugwindow * (-22) /* Random name generator */ 13:29:54 the reason for the infinite loop without the "length" is just that nothing is known about B until after "append" runs, and Prolog has a set evaluation order so it can't take into account any knowledge about what B is about to become 13:30:40 What would Prolog be like if it was more like a SAT solver? 13:30:42 [[Function x(y)]] M https://esolangs.org/w/index.php?diff=73087&oldid=70797 * PythonshellDebugwindow * (-64) /* Syntax */ rm link 13:31:25 shachaf: I have plans for something like this already 13:31:46 the main idea is to start off by evaluating anything that can be evaluated efficiently, then anything that can be evaluated in known finite time 13:31:57 and only resort to an infinite brute-force as the last resort 13:32:15 (even then, breadth-first is better than depth-first so that you search every point in the infinite tree) 13:32:53 Or iterative deepening. 13:33:07 iterative deepening and breadth-first are equivalent 13:33:26 the only difference is that iterative deepening re-checks some cases it's already checked, in order to save the memory needed to remember that it's already checked them 13:33:35 Yes. 13:33:56 Well, hmm, not necessarily? 13:34:27 If you're doing something like alpha-beta pruning, can iterative deepening be better than breadth-first search? 13:34:54 hmm, interesting; you'd need to use the alpha and beta values from lower levels to prune the higher levels in order to get any gain 13:35:01 That reminds me, is there a really nice way that I should understand PVS/Scout? 13:35:23 Well, you might get a better ordering on the earlier levels that you can use for the later levels. 13:35:36 Since alpha-beta is sensitive to the order of the nodes. 13:36:11 I'm actually not really sure what breadth-first alpha-beta would mean. 13:36:21 shachaf: the best way to understand PVS is that it's like alpha-beta search, but instead of setting alpha and beta to worst case values, you set them to best case values so that the search can go faster 13:36:49 you then have to redo the search if your values turned out to be too optimistic, but luckily it's always possible to discover this during your iteration 13:37:15 Hmm, that's interesting, do you have a concrete case in mind? 13:37:27 Otherwise I'll try to think of one, maybe after sleeping. 13:37:35 it's used in chess programs a lot 13:37:53 No, I mean a concrete tree that makes this best-case thing really clear. 13:38:10 I should probably work it out myself given your hint. 13:38:24 concrete search trees are a pain to come up with 13:38:34 Yes, I just mean something small. Let me see. 13:38:58 [[And then]] M https://esolangs.org/w/index.php?diff=73088&oldid=69363 * PythonshellDebugwindow * (+7) 13:39:09 For regular alpha-beta, the insight is that if you have something like min(5, max(8, ...), ...), you don't need to compute the rest of the arguments to max. 13:39:30 yes 13:39:45 Do I need three levels of tree for PVS?
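A compact Haskell sketch of the alpha-beta insight spelled out above (a fail-hard toy over an explicit tree, not a chess engine): children stop being scanned as soon as the (alpha, beta) window closes, which is exactly the min(5, max(8, ...)) situation where the rest of max's arguments are never looked at.

data GameTree = Leaf Int | Node [GameTree]

alphaBeta :: Int -> Int -> Bool -> GameTree -> Int
alphaBeta _     _    _          (Leaf v)  = v
alphaBeta alpha beta maximising (Node cs) = go alpha beta cs
  where
    go a b [] = if maximising then a else b
    go a b (c:rest)
      | a >= b    = if maximising then a else b      -- window closed: prune the rest
      | otherwise =
          let v = alphaBeta a b (not maximising) c
          in if maximising then go (max a v) b rest
                           else go a (min b v) rest

-- alphaBeta minBound maxBound True t is the plain minimax value of t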
13:40:17 no, in PVS you see min(5, …) and just assume that nothing in the … will be lower than a 5 13:40:47 you need to verify that, of course, but verifying it is faster than doing a full alpha-beta search because you don't track the values of alpha and beta 13:41:02 if the verification fails, then you need to do a proper search because you have no idea what went wrong 13:41:48 here's a different intuition: a true depth-first search is "give me the exact evaluation of this position" 13:42:05 an alpha-beta search is "give me the exact evaluation of this position if it's in the range 5 to 8, otherwise just say 'too low' or 'too high'" 13:42:27 and a PVS search is "let me know if this position evaluates to ≥ 5 or < 5" 13:43:03 Which position? E.g. the second argument to min? 13:43:51 in a PVS you do a proper alpha-beta search from the suspected-best candidate move in each position, then just try to prove that none of the others are better than it 13:44:33 so, e.g. from the starting position in chess, if you think e4 is the best move, you get an evaluation for e4 using alpha-beta, then evaluate the position after d4 using a binary evaluation ("worse than e4" / "better than e4") 13:44:48 if it's worse, you're good; if it's better, you need to do a full search again in order to find out the exact evaluation 13:44:49 Is the assumption here that alpha-beta will *probably* say ">= 5"? 13:44:53 yes 13:45:15 Hmm, this is sounding pretty helpful. 13:45:32 so PVS is really dependent on getting the moves in the right order; getting them in the wrong order doesn't affect correctness but the performance tanks 13:45:50 What's the reason the binary evaluation can be faster than full evaluation? 13:47:19 with alpha/beta there's an overlap in the cutoff ranges for low cutoffs and high cutoffs, a value between alpha and beta can't be cut off whether it's high or low 13:47:20 And what's the reason it can be expressed using a regular alpha-beta search, just with different values for alpha and beta? 13:47:46 I suspect I can work out the answers to both of those questions with what you've given me. 13:47:48 to get a PVS trial search out of an alpha-beta search algorithm you set alpha=beta 13:47:57 so there's less (i.e. no) overlap in the cutoff ranges 13:48:06 that makes cutoffs more frequent so the search goes faster 13:48:48 (err, alpha and beta might have to differ by 1 depending on what your convention for range endpoints is, the idea is that every value is either a low cutoff when looking for low cutoffs, or else a high cutoff when looking for high cutoffs) 13:49:06 (whereas alpha-beta has "in between" values) 13:50:59 This is seeming really nice but unfortunately I really need to go to sleep. 13:51:33 I'll think about this again after I wake up. 13:51:39 Thanks! 13:51:50 shachaf: good night!
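Continuing the toy tree from the previous sketch, a much-simplified PVS-shaped use of it: a null-window probe (alpha and beta one apart) answers the yes/no question "is this position at least guess?", and only when the probe says a sibling beats the current best do we pay for an exact re-search. Real PVS re-searches with the remaining window and depends heavily on move ordering, as discussed; this only shows the shape of the idea.

-- null-window probe: every value is a cutoff one way or the other
atLeast :: Int -> Bool -> GameTree -> Bool
atLeast guess maximising tree =
  alphaBeta (guess - 1) guess maximising tree >= guess

-- maximising root: evaluate the first (hopefully best) child exactly,
-- then merely verify the siblings, re-searching only on a fail-high
pvsMax :: GameTree -> Int
pvsMax (Leaf v)      = v
pvsMax (Node [])     = minBound
pvsMax (Node (c:cs)) = foldl step (exact c) cs
  where
    exact t = alphaBeta minBound maxBound False t
    step best sibling
      | atLeast (best + 1) False sibling = max best (exact sibling)  -- probe failed high
      | otherwise                        = best                      -- sibling can't beat best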
13:54:54 ais523: please ping me any time if you would make something with these “sensible ordered Prolog” ideas you wrote, that looks very interesting to test 14:25:47 [[Forbin]] M https://esolangs.org/w/index.php?diff=73089&oldid=73070 * PythonshellDebugwindow * (+466) /* Range loops */ 14:27:01 [[Forbin]] M https://esolangs.org/w/index.php?diff=73090&oldid=73089 * PythonshellDebugwindow * (+40) /* Returns */ 14:48:34 [[Forbin]] M https://esolangs.org/w/index.php?diff=73091&oldid=73090 * PythonshellDebugwindow * (+2242) /* The NOT operator */ 14:51:03 [[Forbin]] M https://esolangs.org/w/index.php?diff=73092&oldid=73091 * PythonshellDebugwindow * (+283) /* I/O */ 14:54:03 [[Forbin]] M https://esolangs.org/w/index.php?diff=73093&oldid=73092 * PythonshellDebugwindow * (+250) 15:13:09 [[Livefish]] M https://esolangs.org/w/index.php?diff=73094&oldid=73079 * PythonshellDebugwindow * (+0) /* C */ 15:14:06 [[Livefish]] M https://esolangs.org/w/index.php?diff=73095&oldid=73094 * PythonshellDebugwindow * (+46) /* Computational Properties */ 15:14:41 [[Deadfish~]] M https://esolangs.org/w/index.php?diff=73096&oldid=73080 * PythonshellDebugwindow * (-11) eso styles ns visible 15:15:13 [[Livefish]] M https://esolangs.org/w/index.php?diff=73097&oldid=73095 * PythonshellDebugwindow * (-21) eso styles ns visible 15:17:27 [[COD]] M https://esolangs.org/w/index.php?diff=73098&oldid=72697 * PythonshellDebugwindow * (+30) /* Raise an error (takes 3 inputs first) */ cat 15:18:34 [[Forbin]] M https://esolangs.org/w/index.php?diff=73099&oldid=73093 * PythonshellDebugwindow * (+92) /* Computational class */ 15:42:29 -!- adu has quit (Quit: Lost terminal). 15:56:47 -!- rain1 has joined. 16:01:31 -!- Arcorann has quit (Read error: Connection reset by peer). 16:06:14 [[User:PythonshellDebugwindow]] M https://esolangs.org/w/index.php?diff=73100&oldid=73086 * PythonshellDebugwindow * (+83) /* Languages */ 16:06:44 [[Language list]] M https://esolangs.org/w/index.php?diff=73101&oldid=72839 * PythonshellDebugwindow * (+13) /* F */ + [[Forbin]] 16:07:17 [[Forbin]] M https://esolangs.org/w/index.php?diff=73102&oldid=73099 * PythonshellDebugwindow * (-12) 16:07:44 -!- imode has joined. 16:11:23 `learn The password of the month is Mayfly. 16:11:26 Relearned 'password': The password of the month is Mayfly. 16:16:52 [[User:Emerald]] https://esolangs.org/w/index.php?diff=73103&oldid=72763 * Emerald * (+12) Rename 16:19:09 -!- TheLie has joined. 16:20:14 -!- sprocklem has quit (Ping timeout: 240 seconds). 16:21:25 -!- sprocklem has joined. 16:24:23 -!- tromp has quit (Remote host closed the connection). 16:37:14 -!- ArthurStrong has joined. 16:58:17 -!- user24 has joined. 17:01:31 -!- tromp has joined. 17:13:08 -!- TheLie has quit (Remote host closed the connection). 18:05:40 -!- tromp has quit (Remote host closed the connection). 18:20:15 -!- TheLie has joined. 18:21:31 -!- tromp has joined. 18:33:10 -!- craigo has joined. 18:33:20 -!- tromp has quit (Remote host closed the connection). 18:41:51 -!- tromp has joined. 18:41:58 [[3switchBF]] https://esolangs.org/w/index.php?diff=73104&oldid=53806 * Voltage2007 * (+878) 18:43:16 [[Deadfish "self-interpreter"]] https://esolangs.org/w/index.php?diff=73105&oldid=63098 * Voltage2007 * (+141) 18:43:54 last minute change there 18:48:14 myname: I thought of that one 3 weeks ago :P 18:48:33 but it only makes sense when it's short-lived. 18:59:21 -!- Sgeo has joined. 19:06:36 -!- user24 has quit (Remote host closed the connection). 
19:19:38 int-e: oh, I was trying to figure out what the pun was 19:19:51 you're probably a day early, though, May has 31 days 19:22:18 and mayflies only last a few days I think 19:22:24 err, a few hours 20:07:41 Time flies like an arrow, fruit flies like bananas. 20:24:24 -!- FreeFull has quit (Quit: rebooting). 20:24:43 -!- sprocklem has quit (Ping timeout: 260 seconds). 20:25:37 -!- sprocklem has joined. 20:27:47 -!- ArthurSt1ong has joined. 20:31:25 -!- ArthurStrong has quit (Ping timeout: 264 seconds). 20:32:03 -!- FreeFull has joined. 20:37:45 -!- MDude has joined. 20:43:18 -!- rain1 has quit (Quit: leaving). 21:14:17 -!- tromp has quit (Remote host closed the connection). 21:25:26 -!- tromp has joined. 21:39:33 finding cycles in a bitmap and printing it as ASCII art: https://hatebin.com/ksmezwqait 21:40:07 now I need to make so that cycles would know which one is inside which 21:49:26 [[L]] https://esolangs.org/w/index.php?diff=73106&oldid=72160 * Voltage2007 * (-313) js was already implemented so I replaced it with befunge 22:54:32 -!- Lord_of_Life_ has joined. 22:57:40 -!- Lord_of_Life has quit (Ping timeout: 256 seconds). 22:57:41 -!- Lord_of_Life_ has changed nick to Lord_of_Life. 23:16:14 -!- TheLie has quit (Remote host closed the connection). 23:20:07 -!- LKoen has quit (Quit: Leaving). 23:25:00 -!- ArthurSt1ong has quit (Quit: leaving). 23:25:18 -!- ArthurStrong has joined. 23:46:57 -!- GFloyd has joined. 23:48:05 -!- GFloyd has left ("we can't breathe on freenode").