00:00:08 !persist regex load 00:00:08 Cannot register regex 00:00:11 Huh? 00:00:17 oops wrong channel 00:00:17 sorry 00:00:26 meant to do it in #toboge 00:04:33 is toboge an egobot clone or something 00:05:55 He's trying to outcompete with EgoBot :( 00:05:58 Damn evolution! 00:06:26 GregorR: you have gone too long without adding new languages :/ 00:07:06 -!- immibis_ has joined. 00:08:34 oerjan: People haven't been very persistent in telling me to add them :P 00:09:04 add some non-esoteric languages 00:09:25 I won't add any languages with file I/O, and most non-esolangs have that. 00:09:50 do scheme without filezors 00:10:49 there are only a few file operators that you have to remove 00:11:51 * ehird` thinks blahbot is supreme! 00:11:56 yes i am! 00:12:01 i do all sorts of things! 00:12:08 but not everything quite yet. 00:12:11 just wrap the code in something like: (define (fuck-you . ignored) (write-to-channel "fuck you, hax0r")) (let ((with-output-file fuck-you) et cetra) ...) 00:12:26 Interesting. 00:12:55 * ihope continues pondering stuff 00:13:30 \x is a function from a to a binder of x to a... 00:13:46 http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_sec_6.6.1 00:13:52 those are the relevent functions 00:13:54 x is an a given a binding of x to a. 00:14:02 -!- immibis has quit (Nick collision from services.). 00:14:05 -!- immibis_ has quit (Nick collision from services.). 00:14:09 -!- immibis has joined. 00:14:39 \x :: all a. a -> Bind \x\ a 00:15:18 if anyone could tell me how to fix my connection, that would be useful. 00:15:24 x :: [x : a] => a 00:15:31 -!- toBogE has quit (Nick collision from services.). 00:15:35 immibis: duct tape. 00:16:28 call-with-input-file, call-with-output-file, with-input-from-file, with-output-to-file, open-input-file, open-output-file, load, transcript-on 00:16:33 that's all you need to overload 00:16:56 Er, \x\ : a, not x : a. 00:17:55 \x :: all a. 
a -> Bind \x\ a; x :: [x : a] => a; (::) :: a -> * -> Dec; 00:17:57 bsmntbombdood: _provided_ he knows his scheme implementation has no i/o extensions. 00:18:00 Er. 00:18:55 oerjan: so he just needs to read his implementation's docs 00:19:43 ihope: seems kinda authmathy 00:19:45 er 00:19:47 automathy 00:19:55 edwardk: automathy? 00:19:55 or he could write his own scheme and be absolutely sure 00:20:09 bsmntbombdood: he could write himself a Scheme in 48 hours! 00:20:17 like automath, the grand-daddy of modern functional languages 00:20:32 on the non-lisp side of the family tree ;) 00:20:56 where we got this strange notion of type systems from, etc =) 00:20:59 and not miss anything that puts surprising I/O access into an "obviously" safe place. 00:22:04 \x :: all (\a. a -> Bind \x\ a); x :: all (\a. [\x\ : a] => a); (::) :: all (\a. a -> * -> Dec); all :: all (\a. (a -> *) -> *); * :: *; \x\ :: Id; Bind :: Id -> *-> *; Id :: *; (:) :: Id -> * -> Req; (=>) :: List Req -> * -> *; List :: * -> *; Req :: * 00:23:41 Oh, and (.) :: all (\a. all (\i. (a -> Bind i a) -> [i : a] => b -> a -> b)) 00:23:52 And Dec :: * 00:24:48 And then there's let... 00:25:45 what are you doing? 00:26:06 Trying to invent a language. 00:26:14 Of the programming kind. 00:28:43 ihope: sorry for the late reply, but how can duct tape ensure a wireless connection stays connected? 00:28:56 immibis: duct tape'll connect anything! 00:29:10 even a wireless connection? 00:29:23 even a wireless, ducttapeless connection? 00:29:43 Sure. 00:29:54 See if you can find wireless duct tape. 00:31:27 ihope: see if you can find wired duct tape 00:37:38 -!- toBogE has joined. 00:43:06 all the bizarre bits of Haskell suppoert one another 00:43:17 without type inference, monads are useless 00:43:29 hell, without types they are useless 00:43:55 i am not quite sure of that. 00:44:44 GregorR: scheme scheme scheme scheme 00:44:46 you _could_ have objects with a bind method. 
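[Editor's note: the "objects with a bind method" idea floated above can be made concrete in any dynamically typed language. A minimal sketch in Python, with monads as plain objects; the names `Maybe`, `unit`, `lift_m` and `NOTHING` are illustrative choices, not anything from the log:]

```python
class Maybe:
    """A monad as a plain object: Haskell's >>= becomes a .bind() method."""
    def __init__(self, value, ok=True):
        self.value, self.ok = value, ok

    @classmethod
    def unit(cls, value):
        # 'return' in Haskell terms (renamed: 'return' is a keyword here)
        return cls(value)

    def bind(self, f):
        # x >>= f  ==  x.bind(f); failure short-circuits
        return f(self.value) if self.ok else self

NOTHING = Maybe(None, ok=False)

def lift_m(f):
    """Works for ANY object exposing bind/unit -- no type inference needed,
    since the monad is recovered from the first object in the chain."""
    return lambda m: m.bind(lambda t: type(m).unit(f(t)))
```

As the conversation notes, this only works for monads that are strict in the left argument of `>>=`, and dispatching on *return* type (a bare `return 0` with no producer in front of it) still has no obvious home.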
00:46:32 But it is a PITA to have to write type sigs all over the place 00:47:10 >> is polymorphic remember 00:47:12 i am talking about in a dynamically typed language 00:47:39 >> would call the bind method of its left argument. 00:47:51 but think of all those functions which work for any monad 00:48:01 how would you do those? 00:48:29 liftM :: a -> b -> m a -> m b 00:48:51 that's not right 00:49:01 yes it is 00:49:28 well... 00:49:30 forall a1 r (m :: * -> *). (Monad m) => (a1 -> r) -> m a1 -> m r 00:50:14 or, in pseudo-Haskell: 00:50:20 clone :: a -> a 00:50:27 anyway, liftM f x = x.bind(\t -> return (f t)) 00:50:31 write the type signature for *that* 00:50:46 oerjan: I know that, but that is verbose 00:51:12 well, first you have x >>= f = x.bind(f), of course. 00:51:32 I see things like: 00:52:08 readThingy >>= liftM (+2) >>= writeThingy 00:52:49 in the IO monad that is: 00:53:12 -!- immibis has quit ("Download IceChat at www.icechat.net"). 00:53:22 -!- toBogE has quit (Read error: 104 (Connection reset by peer)). 00:53:26 SimonRC: i meant to make liftM f x a _function_ 00:53:33 defined by the right hand side 00:53:50 *liftM 00:53:51 ah, wait, i can see how that might work 00:54:07 i actually thought about this before a bit 00:54:32 It is occasionally handy to be able to dispatch on return type 00:55:05 yes, that is hard to get. also, this method works only for monads strict in the left argument of >>= 00:55:18 but it does work for a number of monads. 00:55:21 How would one go about writing enumFromTo? 00:55:43 :: forall a. (Enum a) => a -> a -> [a] 00:55:43 that would be a method too, of course. 00:55:57 ah, I can see how this works 00:56:08 Scala has operators as methods of their first argument. 00:56:40 it also has a bit of comprehension syntax, which is thinly disguised monads. 00:56:56 although the type system doesn't support the full concept. 00:57:15 ok, now a pathological example: "makeIntoZeros = map (\x -> 0)" 00:58:04 :: forall a, n.
(Num n) => [a] -> [n] 00:58:09 btw you can /msg lambdabot 00:58:11 does haskell allow variadic functions? 00:58:25 bsmntbombdood: kinda 00:58:27 bsmntbombdood: in some cases you can do it with type classes 00:58:34 printf exists, for example 00:58:44 it conflicts interestingly with currying 00:59:12 basically, the final result of the function cannot be a function, i guess 00:59:21 ah, no... 00:59:46 the problem is if there is a type in the return value that cannot be deduced from the arguments 01:00:01 hm? 01:00:17 e.g. 0 :: (Num n) => n 01:00:17 printf is polymorphic on the return value :D 01:00:40 oh, i thought you were still talking about variadic functions 01:01:03 and many of the types that are in one sense types of arguments end up as part of the type of the return value when you start currying. 01:01:11 e.g. readThingy >>= liftM (+2) >>= writeThingy 01:01:34 the monad type does not appear in the single argument to liftM 01:01:39 but it does in the return type 01:02:11 as i see it, the monad is found from the first object in the >>= chain which is not return _ 01:03:01 and obviously you lose majorly if you get rid of currying 01:03:35 another idea: 01:03:35 well, getting rid of currying was not part of the original specification :) 01:04:11 if you allow currying the Java and C# programmers will kill you 01:05:16 suppose you have a function getStream :: m a -> m [a] 01:05:57 and because it is used deep inside an abstraction, for elegance you want to pass in "return 0", which eventually gets passed as the first argument of getStream... 01:06:30 you have a naked return, so you must specify the type somehow 01:06:51 and any hard-coded type will reduce generality 01:07:34 well, there _would_ have to be default return(x) objects 01:07:48 "default"?
01:08:08 which would know how to insert themselves into a >>= chain 01:08:16 ouch, hack 01:08:36 getStream f = do { x <- f ; xs <- getStream f ; return x : xs } -- I think 01:08:37 no worse than having numerical conversions 01:09:53 basically, you are making the wrapped Identity monad a supertype of the others. 01:12:09 er, subtype 01:12:37 What's all this about? 01:13:01 how much of monads can be done in a dynamically typed language 01:13:36 with code polymorphic over the monad 01:31:09 -!- ehird` has quit (Read error: 104 (Connection reset by peer)). 01:31:18 -!- marvinbot` has quit (Remote closed the connection). 01:47:10 -!- immibis has joined. 01:49:07 -!- toBogE has joined. 01:50:23 -!- toBogE has quit (Read error: 104 (Connection reset by peer)). 03:09:30 -!- ihope has quit (Read error: 110 (Connection timed out)). 03:36:30 -!- GreaseMonkey has joined. 04:14:11 -!- andreou has quit (Read error: 113 (No route to host)). 04:34:14 -!- oerjan has quit ("Good night"). 05:14:30 Hey, if you read() from a Reader, does it always pull in the next byte? 05:14:35 -!- Arrogant has joined. 05:14:41 um 05:14:46 yes, I do believe 05:15:27 actually, that reads a *character* 05:16:00 wait... wtf 05:16:06 this javadoc is confusing. 05:16:11 It reads an int, actually :P 05:16:18 "The character read, as an integer in the range 0 to 65535" 05:16:25 what the hell does that mean? 05:16:27 Grr. 05:16:35 obviously, it IS an int, but... wait 05:16:47 It's the integer representation of the next unicode character. 05:16:49 this may be because Java has builtin Unicode support 05:16:50 yeah 05:18:07 Now I have to like, completely redesign half my classes. 05:18:09 Hooray. 05:18:20 why the refactor? 05:19:05 Wait no, I don't. Only 1 class I need to redesign. 05:19:17 phew 05:19:59 I need to use an InputStream now, so I have to make sure the bytes are converted to their appropriate types before I have the classes perform the internal magic to represent the types I need. 
05:20:25 Can you test against bytes like (blah == -1) ? 05:20:31 Or do you have to cast to int? 05:20:50 you should be able to make the comparison you have above 05:22:04 on an unrelated note, I've come up with a bunch of monsters and things for the player and fluffy, his faithful genetically engineered pencil-sharpener, to face in my RPG: http://rodger.nonlogic.org/images/CRPG%20combat.png 05:23:25 ideas not shown here include staple removers, peeps(TM) candy and the ghost of Edsgard Djikstra. 05:23:41 I want the Djikstra! 05:24:07 lol 05:25:31 Also, if you cast byte to char, does it do the auto-conversion for you? 05:25:40 Djikstra's attacks will include "Shunting yard", "FOR loop", "A case against the GO TO statement" and "exhaustive proof" 05:26:00 This primitives business is what really confuses me. I'm so used to C primitives ;-; 05:26:14 char literals are dealt with internally as if they instantly become integers 05:26:22 Aha. 05:26:43 that's how I always think about it- single-quotes are just an alias 05:27:18 so (57 == '9') is true 05:27:29 I haven't written ASM in a *looong* time. 05:27:37 I want to do a low level project. Methinks an emulator. 05:27:43 in Java? 05:27:47 In C :P 05:27:52 eeew 05:28:14 You can't write a substantial emulator in a high-level language and expect it to be fast, though. 05:28:49 Plus you need cheap bit-flipping hacks that is total C-lurv :3 05:29:02 you can't write anything in java and expect it to be fast, that includes emulators 05:29:57 Funny how a byte-code compiled language can't be fast, no? 05:30:09 OSS anti-Java stigma, when unfounded, is funny. 05:30:22 at least, not on my computer 05:30:46 I'll bet my machine is worse than yours. 05:30:59 I'm with Sukoshi on this one, immibis- Java has a tremendous amount of technology behind it to *make* it fast, even when it's innately at a disadvantage 05:31:08 JavaC is a fantastic piece of code. 05:32:22 really? 
i must have a slow computer then 05:32:31 * immibis checks in System Properties 05:32:50 2GHz, 248MB memory 05:33:19 there is no excuse to have that little RAM. It's a travesty. 05:33:26 immibis: 1.6 GHz, 256 MB. 05:33:44 ;D 05:33:53 ram IS NOT EXPENSIVE. It's the most affordable upgrade you can make to your computer these days. 05:34:11 8MB is used by onboard graphics 05:34:27 i think there is actually 256MB in the box 05:34:44 I have a slower computer, and yet it runs fine. 05:34:55 You even run Windows, and the Linux JVM has historically been known for being crappy. 05:35:21 the sun jvm or gij? 05:36:10 Sun. 05:36:40 I can attest to this- applet compatibility on linux is absolute shit 05:37:04 unreliable keylisteners, improper graphics buffering, and a host of other intermittent problems 05:38:53 I've had numerous programs run on OSX and windows flawlessly, and then utterly fail when I test them out on one of the fedora-based lab machines up here in the CS department 05:39:25 It's gonna improve now that Java is OSSing the thing. 05:39:33 ...in theory. 05:39:57 Never doubt the power of horrendous numbers of OSS coders. 05:39:59 OR, we'll wind up with a ton of slightly broken and weird forks of the language 05:40:14 Read the GNU Classpath mailing list. It's *really* active. 05:40:23 "Woo I should add operator overloading to Java FOR NO REASON! Whoopeee!" 05:40:37 i reckon they'd have a fork of Java with built-in "Hello World!" support 05:40:46 Java 5 did enough bad things. :'( 05:40:58 But Java 1.6 really upped Linux VM awesomeness. 05:41:07 Much faster/lighter on the memory. 05:41:21 the apple can only fall so far from the tree 06:01:44 getting off now, cya 06:02:13 -!- GreaseMonkey has quit ("custom quit messages --> xchat.org <-- hydrairc sucks"). 06:04:47 Although I used to find it aggravating in the beginning, now I'm starting to like Java's restriction of one class per file and the class should have the same name as the filename. 
06:05:04 I remember hunting typedefs in large globs of C codes and shuddering. 06:06:55 :) 06:07:32 although in cases where it makes some sense (like non-public classes), it *is* sometimes possible to have more than one in a file 06:11:07 'later everyone- I require sleep 06:11:16 -!- RodgerTheGreat has quit. 06:56:43 could someone please indicate what is wrong with the following bf program: +[,>[-]+.<[.,]+.[-]+++++++++++++.---.] 06:57:31 it is supposed to read from standard input until end-of-file and echo it putting the character with code 1 before and after it 06:57:50 in other words, it is meant to translate plain text into a CTCP request when run on EgoBot as a daemon 07:04:50 why so complicated? 07:05:14 +.>,[.,]<+ 07:05:23 left over from an earlier revision 07:05:30 s/<+/./ 07:05:32 wait a second... 07:05:37 what about the CRLF though? 07:15:18 -!- Arrogant has quit ("Leaving"). 07:59:59 -!- clog has quit (ended). 08:00:00 -!- clog has joined. 08:00:09 -!- sebbu has joined. 08:58:48 !daemon ctcp bf8 +[.[-],[.,]+.++++++++++++.---.] 08:58:52 !ctcp ACTION blinks 08:58:54 ACTION blinks 08:58:55 !ctcp ACTION blinks 08:58:58 ACTION blinks 08:59:03 !undaemon ctcp 08:59:06 Process 1 killed. 08:59:24 !daemon ctcp bf8 +[.[-],[.,]+.+++++++++.] 08:59:28 !ctcp ACTION blinks 08:59:30 ACTION blinks 08:59:37 !ctcp ACTION blinks 08:59:40 ACTION blinks 08:59:48 anyone know what is happening? 08:59:51 !undaemon ctcp 08:59:54 Process 1 killed. 09:00:27 !help usertrig 09:00:30 Use: usertrig Function: manage user triggers. may be add, del, list or show. 09:00:54 !usertrig add ctcp bf8 +.,[.,]+. 09:00:56 Trigger added (ctcp)! 09:01:02 !ctcp ACTION blinks 09:01:04 * EgoBot blinks 09:01:05 !ctcp ACTION blinks 09:01:08 * EgoBot blinks 09:01:14 well, that works. 09:01:27 !daemon cat bf8 +[,.[-]+] 09:01:34 meow 09:03:36 meow 09:06:05 !cat 09:06:07 !dog 09:06:10 woof 09:06:49 !goat 09:06:52 woof 09:06:57 why does the goat go woof? 
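[Editor's note: the daemon programs above are easy to sanity-check offline. A minimal brainfuck interpreter sketch in Python, assuming 8-bit wrapping cells and the EOF-reads-as-0 convention the working usertrig version relies on; `run_bf` is an illustrative name, not part of any bot:]

```python
def run_bf(code, stdin=""):
    """Tiny brainfuck interpreter: 30000 8-bit wrapping cells, EOF reads as 0."""
    tape, ptr, out = [0] * 30000, 0, []
    inp = iter(stdin)
    # Precompute matching brackets so loops can jump in O(1).
    jump, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i
    pc = 0
    while pc < len(code):
        c = code[pc]
        if c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == ',': tape[ptr] = ord(next(inp, '\0')) % 256
        elif c == '[' and tape[ptr] == 0: pc = jump[pc]
        elif c == ']' and tape[ptr] != 0: pc = jump[pc]
        pc += 1
    return ''.join(out)
```

Under those conventions, `run_bf("+.,[.,]+.", "hi")` yields `"\x01hi\x01"`, i.e. the input bracketed by character code 1, which is the CTCP framing the original longer program was after.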
09:10:33 -!- immibis has quit ("I cna ytpe 300 wrods pre mniuet!!!"). 10:19:10 "If you are caught downloading copyrighted material, you will lose your ResNet privileges forever.", then, later on the page, "Copyright © 2005 by the University of Kansas". Ouch. 10:32:18 ? 10:33:30 http://www.resnet.ku.edu/ 10:33:40 Ach, du lieber! <<< OMG, rather you? 10:33:48 funny funny 10:36:04 hmph, why is everyone gone when i need them 10:36:23 okay, i admit i did't need oerjan that much 10:44:44 -!- ehird` has joined. 11:00:34 -!- jix has joined. 11:03:50 where's that video about procedures in c2bf again? 11:20:58 -!- Cesque has joined. 11:22:12 -!- Cesque has quit (Client Quit). 11:48:05 -!- andreou has joined. 11:53:27 damn 11:53:37 made a language with static typing 11:54:04 (to be continued...) 11:54:24 eh 11:54:30 actually, i solved my problem 12:05:46 -!- jix has quit (Nick collision from services.). 12:06:00 -!- jix has joined. 12:36:55 -!- RedDak has joined. 12:40:01 -!- andreou has quit ("Leaving."). 12:48:29 -!- oklofok has joined. 12:48:56 so okay, i make a language, then try creating i using s, k and i -combinators. 12:49:11 WHY CAN'T MY I COMBINATOR USE ITSELF RECURSIVELY??? 12:49:23 this kept me occupied for quite a while 12:49:28 i'm no great <3 12:50:12 (don't use recursion if you don't know it or just happen to be a miserably failish person.) 12:50:23 (is the lesson here) 12:53:05 -!- ololobot has joined. 
12:53:31 >>> ul `ii 12:53:33 -> i 12:54:07 >>> numbda s={a->{b->{c->(a!c)!(b!c)}}};k={a->{b->a}};i={a->s!k!k!a};i!7 12:54:08 num:7 12:54:24 the i combinator via ``skk in numbda 12:55:09 (the language i created to make possible to make lambdas using parenthesis while still having them for normal grouping) 12:55:26 and no, this feature hasn't been done yet 12:55:36 and yes, i know no one is interested in whether it is 12:55:48 and now, gonna eat something funnish -> 12:56:27 >>> numbda 5+4-5*2 12:56:28 num:-1 12:56:46 crack it if you wish, tell me if you do 12:56:47 -> 13:00:56 hmm 13:01:50 -!- oklobot has joined. 13:01:52 hihi 13:02:01 !help 13:02:04 help ps kill i eof flush show ls bf_txtgen usertrig daemon undaemon 13:02:05 oh 13:02:07 1l 2l adjust axo bch bf{8,[16],32,64} funge93 fyb fybs glass glypho kipple lambda lazyk linguine malbolge pbrain qbf rail rhotor sadol sceql trigger udage01 unlambda whirl 13:02:08 ah 13:02:25 !exec 5 5 3AddAddPrntnl 13:02:28 Huh? 13:02:32 hmm 13:02:38 !exec 5 5 3AddAddPrntNl 13:02:39 ah 13:02:39 13 13:02:40 Huh? 13:03:28 oklobot sucks, i just wanted 4 nicks here for the hell of it 13:03:41 now, retry at the going away thing -> 13:03:47 -!- ihope__ has joined. 13:04:12 -!- ihope__ has changed nick to ihope. 13:28:26 numbda looks like oklotalk 13:29:21 wait how does egobot do befunge 13:29:23 multiple lines 13:35:11 source file url 13:35:11 is there a precompiled binary of fukyorbrane for windows anywhere? 13:35:19 and i'm not gone 13:35:20 why... 13:35:22 :< 13:37:32 =p 13:37:40 hmm 13:37:51 FukYorBrane combined with self-replicating brainfuck? 13:38:01 you could easily replace an opponents code with your own. 13:38:10 or similar weirdness 13:40:10 ololobot has a new language now 13:40:13 just added 13:40:19 >>> bs 33<11<=!Hello> world>: 13:40:20 Hello, world! 
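[Editor's note: the ``skk derivation of i shown above can be replayed in any language with closures, which makes it clear why no recursion is needed. A sketch in Python mirroring the numbda definitions, with curried one-argument lambdas and numbda's `!` application written as ordinary calls:]

```python
# S and K as curried closures; `a ! b` in numbda is just a(b) here.
S = lambda a: lambda b: lambda c: (a(c))(b(c))
K = lambda a: lambda b: a

# The identity combinator derived as ``skk: S K K x = (K x)(K x) = x.
I = S(K)(K)
```

So `I(7)` evaluates to `7` with no self-reference anywhere, matching `i={a->s!k!k!a};i!7` giving `num:7` above.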
13:40:25 yay :) 13:40:36 nice to extend, that one 13:40:49 now, perhaps, i'm going -> 13:52:14 one thing i don't understand about bf function calls like in c2bf 13:52:49 is that the only way to call a function is to put the function id in the current cell, and then >end the loop< (i.e. return from the current function.) so how do you handle my_function() { a_func(); more_code; }? you'd return right after a_func 13:53:05 and you can't use a call stack since you can't represent a certain part of a function 14:18:26 do many brainfuck compilers optimize x[x] to a do..while? 14:29:35 Oh my. 14:29:54 what 14:30:03 oklofok: what's that language? 14:30:12 -!- RedDak has quit (Remote closed the connection). 14:34:04 hmm, x[x] optimization could really speed up some code 14:49:24 * ihope ponders 14:49:59 ponders what 14:50:16 Ponders how to write that without x being present twice. 14:50:26 Oh 14:50:51 ... i think it'd be hard 14:50:58 which is why lovely compilers should do it for us! 14:51:14 * ihope nods 14:51:44 Perhaps AI means a good compiler. 14:52:10 * ehird` ponders writing a bf-to-c compiler in C, optimizing - yeah it's been done before, but they're short affairs, and you can optimize so much in BF 14:52:49 (wow -- i'm stupid, i just realised that cell-wrapping is just modulo 256) 15:03:05 -!- oerjan has joined. 15:34:44 i might write that bf compiler. 16:32:05 ihope: numbda 16:33:35 what is numbda 16:33:43 (ihope) oklofok: what's that language? 16:33:44 oh 16:33:48 oklobot's language? 16:33:53 that one i call oklobot :) 16:33:58 or The Oklobot Language 16:34:07 >>> bs. 16:34:12 oh 16:34:13 ah 16:34:18 that's a language of my friends 16:34:55 it's kinda like brainfuck, except you have bitwise logic and basic arithmetic for adjacent cells 16:35:04 is there a page on the wiki describing most of the good brainfuck-compilation optimization techniques known? 
16:35:24 my friend's knowledge about esoteric languages is pretty much limited to brainfuck 16:35:33 >>> bs 33<11<=!Hello> world>: 16:35:34 Hello, world! 16:35:36 That's numbda? 16:35:41 no 16:35:42 ehird`: that while -> do while thing isn't possible in general, methinks 16:35:48 oklofok, why not? 16:35:48 ihope: that's b00tstrap_ 16:35:55 * ihope nods 16:36:13 just match on a parse tree x[x], where x is matched as what's in the [], then convert 16:36:16 because you can't keep a cell for the while in store if you don't know where in memory x will land 16:36:32 oh 16:36:35 you mean, optimizing that 16:36:36 ah 16:36:39 yes 16:36:42 i was thinking about what ihope said 16:36:47 and answered to him, actually 16:36:56 well you said "ehird: that while -> do..." 16:37:10 i did, because i forgot who asked what. 16:37:12 anyway 16:37:20 what i mean is, instead of x[x] being e.g. x; while(*p){x} it's do{x}while(*p) 16:37:21 optimizing that is just a stirng match 16:37:24 yes 16:37:27 *string 16:37:32 or a parse tree match for more advanced compilers :P 16:37:46 essentially the same in the case of brainfuck 16:38:00 maybe x[xy] could be optimized too 16:38:02 because in brainfuck you can't play with syntax 16:38:07 that is if x isn't just one character or something silly 16:38:32 um, wait, no. 16:39:02 >>> numbda "Hello, world!" 16:39:03 Hello, world! 16:39:16 i realized my static scoping is broken when i was eating 16:39:31 recursion in general will not work 16:39:55 but you can't notice it yet, really, since there aren't control flow operators to make recursion usable 16:40:08 i also think that the algorithms to set the ptr to a certain value can be optimized 16:40:15 things like copying, too 16:40:31 you just need either some heuristics or some hard-coded snippits to optimize 16:40:39 you mean [-]+++++ can be made into cell=5 16:40:40 ? 
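[Editor's note: the `[-]+++++` -> `cell = 5` trick asked about above is a one-line peephole pass. A sketch in Python; `fold_sets` and the `SET(n)` pseudo-instruction are illustrative names, not from any real compiler discussed here:]

```python
import re

def fold_sets(code):
    """Peephole pass: rewrite '[-]' followed by n '+'s as a SET(n)
    pseudo-instruction (a bare '[-]' becomes SET(0))."""
    return re.sub(r'\[-\](\+*)',
                  lambda m: f'SET({len(m.group(1))})',
                  code)
```

For example `fold_sets(">[-]++<")` gives `">SET(2)<"`; a later code-generation stage would emit `*p = 2;` for it.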
16:40:46 yes, and: 16:40:53 i think my brianfuck compiler does that 16:40:56 *brainfuck 16:40:57 (wait) 16:41:05 i will 16:41:23 yes, and: [>+<-] can be optimized too 16:41:36 it's *(p + 1) += *p; *p = 0; 16:41:41 my brainfuck compiler optimizes that methinks 16:41:47 to do it completely requires solving the halting problem of course 16:42:03 but you can try some heuristics, and use hardcoded optimizations for a few ways. 16:42:16 any [] that has right_moves-left_moves==0 can be completely optimized. 16:42:22 and my compiler does that methinks 16:42:25 don't remember 16:42:30 also you can optimize every single one on http://esolangs.org/wiki/Brainfuck_constants :) 16:42:34 if i actually implemented the last optimization 16:42:39 heh 16:42:46 oklofok, and that has no IO right, you mean :) 16:42:55 IO right? 16:43:02 ah 16:43:04 yes 16:43:05 "and that has no I/O, right" 16:43:11 and, how do you do it? 16:43:16 do you interpret it at compile-time? 16:43:20 err 16:43:24 otherwise nested loops such that r-l==0 might be hard.. 16:43:38 you just sum up the +'s and -'s for each level 16:43:44 ah, right 16:43:49 but...
16:43:53 oh 16:44:08 and an optimized [] will just be a list like [ccell-4]+=4, [ccell]-=3 16:44:21 so [+++[---]] would be compiled as while (*p) { *p += 3; while (*p) { *p -= 3; } } 16:44:31 i was thinking you'd flatten the loop somehow and i was confused 16:44:43 err, [---] would be optimized as [-] = NULLIFY 16:44:47 well yes but 16:44:52 i mean in the context of this optimizations 16:44:53 [+++NULLIFY]==[-]= nullify 16:44:56 oh 16:45:34 you can flatten a thing like [+-+-+-+->-+-+-++---->-+-++-<--+--<<-+++-<-+++++++>] 16:45:41 err 16:45:49 [+-+-+-+->-+-+-++---->-+-++-<--+--<>-+++-<-+++++++>] 16:46:16 and nullifications can usually be there as well and can be optimized 16:46:34 but that's it of course 16:46:38 so [>++<-[+>-<]] would be while (*p) { *(p + 1) += 2; *p--; while (*p) { *p++; *(p + 1)--; }} right 16:46:52 yeah 16:47:12 but that's a pretty obvious optimization anyway 16:47:16 err 16:47:21 of course, i was wrong there 16:47:23 i see these optimizations would be much easier with the code as a nested list (for loops) and a language with pattern matching ;) 16:47:30 this would be quite verbose in C 16:47:34 Did somebody say Haskell? 16:47:44 ihope, no SML love? 16:47:45 a non recursive one with num(>)-num(<)=0 can always be fully optimized 16:47:56 i mean, with no nested []'s 16:48:00 I've hardly heard of SML. 16:48:05 but obvious obvious, that doesn't really help 16:48:06 ihope, i think it looks nice 16:48:09 i haven't used it much 16:48:20 Related to ML, probably. 16:48:25 yes 16:48:27 SML = Standard ML 16:48:38 ehird`: it isn't verbose 16:48:44 oklofok, :) 16:48:50 and of course you have the code as a nested list 16:48:51 oklofok, what about initialization optimizations 16:48:53 *optimization 16:49:06 oh 16:49:12 you mean stuff like constants? 16:49:16 they can be precalculated 16:49:19 of course 16:49:20 >+++<- at the start of the program makes e.g. 
the char tape[3000] be char tape[3000] = { 255, 3 }; 16:49:31 instead of tape[3000]; 16:50:40 well yes, of course 16:50:57 the only thing that can't completely be optimized is stuff where a part of a code uses a cell whose value isn't surely known at that point 16:51:31 so everything done before an input can trivially be encoded in the starting patterns 16:51:34 *pattern 16:52:26 you mean, things like >+++<->[code] is optimized as code not being conditional at all? 16:52:39 i mean 16:52:41 at root level that is 16:52:49 a program that doesn't take input is optimized into it's result. 16:52:54 if you do compiling/optimizing. 16:53:01 no matter what that program is? 16:53:07 no 16:53:09 a factorial program with a fixed input would be evaluated at compile time? 16:53:14 that doesn't take input. 16:53:15 yes 16:53:20 but that, at compile time, is insane 16:53:23 you're not writing a compiler 16:53:24 errr 16:53:26 okay... 16:53:38 i see it as the best optimization possible. 16:53:39 you're writing an interpreter which sometimes delegates input to the code outputted by it 16:53:53 seriously, no compiler would run a whole factorial program and then just compile the result 16:54:06 well, i'm not talking about a compiler 16:54:15 i'm talking about what you *can* optimize away 16:54:24 i don't care about what's actually feasible 16:54:34 the "optimization" you have described has a name it's called interpretation :) 16:55:10 a-ha 16:55:14 interpretation really just optimizes source code into a more optimal form - it does a pretty good job, too - it produces output requiring no computation. :-) 16:55:24 you can't compile, run, recompile because... you'd get scared? 16:55:35 hmm 16:56:05 if a code always produces the same input, the best optimization is to have it just return that input 16:56:12 ... 
16:56:13 output 16:56:14 sorry 16:56:35 Yes, and that falls under the subclass of optimizations known as "interpretation" 16:56:43 if you don't want to optimize that because of your ideology, that's fine 16:56:50 but do not start bugging me about it :) 16:57:00 However, interpretation is generally not a good optimization for a compiler to perform, as compilers are designed to generate code which goes through the optimization process of interpretation 16:57:06 Doing it before the output defeats what a compiler is meant to do. 16:57:22 i'm not bugging you :) just saying 16:57:22 aha, so you can't optimize constants 16:57:35 you said you would like it to do that earlier 16:57:37 you can, because that is not interpretation in its strictest sense 16:57:44 i'm not sure where we went a different way. 16:57:47 (Really, everything is interpretation. But, let's think of it stricter) 16:58:03 we went a different way when you said that all programs without input should be optimized fully to their output 16:58:09 because that is interpretation in its strictest sense :) 16:58:21 i said that's how far you get in optimization 16:58:25 doing less is fine 16:58:33 it's just you can choose any level between 0...that 16:58:37 for optimization 16:59:16 any loop that always just the same thing can be optimized, that's the most basic idea of optimization, you can choose to optimize it away fully, or just optimize some of it 16:59:45 i'm just saying there's nothing superturing about optimizing code that produces the same output every time 16:59:49 and it's trivial 17:00:17 sure. 
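[Editor's note: the "right_moves - left_moves == 0" rule discussed further up can be implemented directly. A bracket-free loop body with balanced <>, and a net change of -1 on the loop cell, runs exactly *p times, so its net effect is one multiply-add per touched offset. A sketch in Python; the function name and the dict-of-offsets representation are editorial choices:]

```python
def flatten_balanced_loop(body):
    """For a bracket-free loop body with balanced <> and net -1 on the
    loop cell, return {offset: coeff} such that [body] is equivalent to:
    for each offset, cell[p+offset] += coeff * cell[p]; then cell[p] = 0.
    Returns None when this rule does not apply."""
    deltas, off = {}, 0
    for c in body:
        if c == '>': off += 1
        elif c == '<': off -= 1
        elif c == '+': deltas[off] = deltas.get(off, 0) + 1
        elif c == '-': deltas[off] = deltas.get(off, 0) - 1
    if off != 0 or deltas.get(0) != -1:
        return None            # unbalanced, or loop cell not decremented by 1
    return {k: v for k, v in deltas.items() if k != 0}
```

The move loop `[>+<-]` flattens to `{1: 1}` (add the cell once at offset 1), and `[>++>+++<<-]` to `{1: 2, 2: 3}`, with the loop cell zeroed afterwards in both cases.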
17:00:51 i know you mean you want +++++(<- input there) [code to calculate f(x) for any x indicated by the number of +'s in the beginning] to actually just have the loop optimized 17:00:58 so that the first +'s could be changed 17:00:59 my definition of a very-highly-optimizing compiler is that it optimizes up to everything but complete interpretation - the point of a compiler, IMO, is to produce code which you can then apply that final optimization on 17:01:14 and it would have the same functionality, just change its first few bytes 17:01:24 this is impossible. 17:01:28 you can't know which + 17:01:29 ---- 17:01:40 you can't know which +'s in the code are input hardcoded by the programmer. 17:01:49 so you can't optimize anything. 17:02:01 and i know i'm not being clear :) 17:02:08 kl 17:02:12 optimization is all about heuristics 17:02:33 true optimization - to make code completely "optimal" - is impossible. 17:03:16 yes, you can't optimize fully a code that can take infinite input 17:03:26 i mean, any length input that happens to be given 17:03:44 but you can always trivially optimize anything that does not take input 17:03:52 unless you have ideological problems with that 17:03:56 as you seem to have 17:04:00 nah 17:04:10 i think our definition of input is mixed up 17:04:13 i don't care about that stuff, i just care about the fact you can optimize a constant. 17:04:26 by input you also mean hardcoded input, i know 17:04:30 i said that earlier 17:04:33 or was i wrong? 17:04:39 there is a subtlety if your non-input taking expression doesn't terminate. 17:04:46 oerjan, exactly 17:04:53 ah 17:04:55 compilation in code without errors should ALWAYS succeed 17:04:57 even if it doesn't halt.
17:05:07 sorry about that 17:05:23 if it doesn't terminate quickly, of course you can't optimize it 17:05:25 fully 17:05:32 define quickly 17:05:38 yes 17:05:42 err 17:05:45 oh 17:05:50 in a feasible time 17:05:54 * ehird` does the halting problem dance 17:05:58 you define it when you make your optimizer. 17:06:44 also, there is a subtlety if the result is actually much larger than the expression creating it, and isn't always used. 17:06:47 anyway, i just meant constants, and a program taking no input can always be optimized into its result if you have its result 17:07:05 so i could have some code that takes hours to compile but less than a second to run 17:07:21 oerjan: stop making points :) 17:07:41 also i could have code that, just because it takes a long time to execute, is denied optimization -- Oh a-ha! This can result in /different output for the same input on different machine specs/ 17:07:45 Which is fundamentally wrong 17:07:49 oklofok: i am saying, partial evaluation is a well-known optimization technique but it has limits. 17:07:49 ehird`: if it takes an hour to compile, it takes an hour to run 17:07:53 that's obvious 17:08:03 oerjan: yes, but i didn't think of that 17:08:15 stop being cleverer than me, is my point :D 17:08:21 sure but i might want to have some sort of automatic build process so people working on something can test the code 17:08:28 if it's run at build time they can't 17:08:56 huh? 17:09:07 anyway 17:09:43 though i was wrong about the fact you can always optimize a non-input-taking program, which i now find very very dumb, i was right in saying if you can do it, you should 17:09:46 what's so huh 17:09:57 a team of people are working on software A 17:10:00 or then you are just making a bad optimization for fun 17:10:05 they agree to each test each new release 17:10:22 so, automated program B compiles the new version of A, so that the team can test it (hint: it has a bug - it loops forever!)
17:10:26 and of course, true, you shouldn't optimize if the output is very complex compared to the code 17:10:38 however the compilation process runs on the automated program, so each coder only gets the output produced 17:10:40 in which case you just optimize some parts 17:10:41 they cannot test the software. 17:10:51 define "very complex compared to" 17:10:52 oklofok: never! especially when i am having trouble with #haskellers outclevering me :) 17:10:57 in algorithms. 17:11:08 oerjan: i'll become better then, okay? 17:11:14 ehird`: longer. 17:11:28 define longer 17:11:41 len(code)>len(memory state) 17:11:53 is that: 17:12:05 code being the unoptimized code, memory state being after the run 17:12:09 string:length(compile(code)) > string:length(compile(memory state))? 17:12:30 if so, you could have some really complex code that doesn't get optimized just because of its output size -- this seems like a bad heuristic 17:12:46 ehird`: so you want an optimization that's still possible to turn into the original brainfuck code? 17:12:57 (AND, of course, you get a longer compile time) 17:13:07 (Since it has to compile BOTH (running one segment of code that may be complex), THEN compare the results) 17:13:08 i get that impression from the teams-working-on-something example 17:13:19 (If it decides against optimization, then it has to execute AGAIN at run-time - zzzz snore) 17:13:52 ehird`: compiling oughtta be fast? 17:13:58 relatively. 17:14:13 relative to what? 17:14:41 errr 17:15:14 you mean if the original program runs T seconds, and the compiler runs U seconds, the resulting code must run <= T-U seconds? 17:15:24 i can't think of another criterion 17:15:47 hmm 17:16:11 i'm not sure where i got that impression, you never said anything about a criterion 17:16:55 anyway, i don't see how a compiler shouldn't try to run the code fully 17:17:21 because of speed 17:17:24 that's just silly. 17:17:54 well, then why compile at all? :) 17:18:01 ?
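The "fold a no-input program into its result" idea being argued here, together with the non-termination and output-size caveats oerjan raises, can be sketched with a step budget so the optimizer never has to solve the halting problem. A hypothetical Python sketch — the `fuel` limit and the len(code)-vs-len(result) heuristic are the ones debated above, not anyone's actual compiler:

```python
def run_bf(code, fuel=100_000):
    """Interpret no-input brainfuck; return the output bytes,
    or None if the step budget runs out (possible non-termination)."""
    tape, ptr, pc, out = [0] * 30_000, 0, 0, []
    stack, match = [], {}
    for i, c in enumerate(code):          # precompute matching brackets
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            match[i], match[j] = j, i
    while pc < len(code) and fuel:
        fuel -= 1
        c = code[pc]
        if c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '.': out.append(tape[ptr])
        elif c == '[' and tape[ptr] == 0: pc = match[pc]
        elif c == ']' and tape[ptr] != 0: pc = match[pc]
        pc += 1
    return out if pc >= len(code) else None

def optimize(code, fuel=100_000):
    """Partial evaluation: replace a no-input program by its output,
    unless it may not halt (fuel) or the result is longer than the code."""
    out = run_bf(code, fuel)
    if out is None:
        return code                       # possibly non-terminating: leave as-is
    folded = ''.join('+' * b + '.' + '[-]' for b in out)  # naive constant emitter
    return folded if len(folded) <= len(code) else code
```

So `++[->+++<]>.` folds to a straight `++++++.[-]`, while `+[]` survives untouched — exactly the "compilation should ALWAYS succeed, even if it doesn't halt" behavior.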
17:18:10 to make the program faster? 17:18:28 the compiler runs once. 17:18:37 for one piece of code 17:19:01 if you do precompilation, of course you don't optimize even +++>--<++ 17:19:07 in the beginning of the program 17:19:17 it's faster just to execute one instruction at a time. 17:19:24 err 17:19:30 i mean, if you do interpretation 17:19:38 s/precompilation/interpretation 17:19:55 if you interpret the code, then my arguments about this have been wrong 17:20:01 but you were talking about compilation. 17:20:12 unless you have mixed the to concepts 17:20:14 *two 17:20:24 *confused the two concepts 17:22:23 no 17:22:35 anyway a compiler is an interpreter and an interpreter is a compiler. 17:22:58 a-ha 17:23:07 Wow, a BF compiler that warns if < and > aren't balanced... 17:23:26 errr 17:23:33 sounds like a sucky compiler :P 17:23:36 exactly :) 17:23:37 very lame 17:23:43 does one exist? 17:23:45 you mean? 17:23:46 yep 17:23:49 http://home.arcor.de/partusch/html_en/bfd.html 17:23:49 :\ 17:24:08 okay... well guess you often have them balanced 17:24:28 but i'd prefer syntax highlighting for those loops that have them balanced 17:24:50 a stack in brainfuck is 1 (item 1) ... 0 isn't it? 17:25:08 errr 17:25:16 [1][1][1]...[0]? 
17:25:19 no 17:25:25 [1][my item][1][my item][0] 17:25:33 ah 17:25:40 and you navigate it with [>process item>], and push with [>>]+>(CALCULATE VALUE HERE) 17:25:47 (assuming you're on the starting 1) 17:25:58 well, you can't really ask "what a stack is in brainfuck", but yes, i've done stacks that way, usually 17:26:01 and pop with [>>]<<->(USE VALUE) 17:26:10 well, i meant what's a common, kinda-efficient way :) 17:26:33 yeah, then i'd say that 17:27:08 if you use multiple stacks, you might wanna have them interleaved 17:27:09 the initial 1, of course, is to separate stacks 17:27:09 of course 17:27:17 so two stacks, non-interleaved is: 17:27:35 [1][item][1][item][0][1][item][1][item][0] 17:27:51 yeah 17:27:58 whereas [item][1][item][0][item][1][item][0] is ambiguous, depending on where you start etc 17:28:18 and a cell for index carrying if you do random access memory 17:28:21 i mean... a vector 17:28:34 you mean, a "where I am"? 17:28:36 like 17:28:37 [1][value][for calculation][1][value][for calculation][1][value][for calculation][0] 17:28:42 oh, right 17:28:50 those are always 0 but can be played with 17:28:51 also 17:28:54 so like, you do all your destructive operations involving value in [for calculation] 17:28:55 you can use the 1-cell for that 17:28:58 so as not to disturb it 17:29:03 and then make it one after your calculation 17:29:16 (What if you need more cells? Sounds a bit silly... maybe there's a better way) 17:29:20 Well, i guess one cell is goodo 17:29:22 yes, but i just realized you can use the 1-cell for that 17:29:32 for calculation 17:29:34 ah 17:29:39 unless you do brainfork 17:29:42 you mean, use the interspersing [1]s? 17:29:50 and then do [-]+ once you move it out of the way? 17:30:16 so, you pop off the stack, compute a little bit, move that barrier cell to the top of the stack, go to that cell, repeat 17:30:20 until you're done?
17:30:42 when you move into index n, you carry n with you and each time you go one cell right in your vector, you decrease n until it's zero and you have your value 17:31:07 also pushing should be [>>][-]+>[-](CALCULATE VALUE), you need the [-]s since popped values stay on the tape, just after the end marker 17:31:29 hmm 17:31:42 i'll write a short doc explaining it 17:31:44 what i mean 17:31:49 err okily 17:32:17 i was talking about a random access vector, not a stack 17:32:25 unless i wasn't clear about that 17:32:35 which i most likely wasn't 17:37:52 this describes the stack representation i was talking about: http://pastie.caboo.se/80941 17:37:55 is it common? 17:45:10 -!- i-- has joined. 17:45:50 :/ 17:46:43 -!- trepliev has joined. 17:47:13 -!- i-- has left (?). 17:58:22 wait 17:59:54 ehird`: you want a way to get the value out of the stack as well, in some cases 18:00:06 you mean, navigate to a specific element? 18:00:09 i mean, a way to move it to the beginning of the stack 18:00:10 nope 18:00:13 yes 18:00:14 i give that 18:00:15 see the end 18:00:16 oh 18:00:25 it's <<[<<], while on a boundary 18:00:28 sorry, i didn't actually read it thoroughly yet xD 18:00:36 or, >[<<] on a value 18:00:43 (Well, <<<[<<] is better, but meh) 18:00:45 You get the idea 18:00:45 errr 18:00:52 you don't move it out of the stack 18:01:00 i don't understand 18:01:25 you must be able to get the value from the top of the stack to somewhere completely other 18:01:35 use copy functions? 18:01:43 that's not part of the stack itself. 18:01:57 i mean, traverse the stack down carrying the value 18:02:10 so that you get it *out of the stack* 18:03:10 that's a bit harder to do 18:03:12 -!- oerjan has quit ("Dinner, probably").
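The [1][item][1][item][0] convention and the corrected push/pop idioms above can be checked with a tiny tape simulator. This is a sketch of the layout described in the pastie, not any canonical brainfuck stack; note how the [-]s in the push really are needed to clear a stale popped value:

```python
def bf(code, tape, ptr=0):
    """Run a brainfuck fragment in place on `tape`; return the final pointer."""
    match, stack = {}, []
    for i, c in enumerate(code):          # precompute matching brackets
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            match[i], match[j] = j, i
    pc = 0
    while pc < len(code):
        c = code[pc]
        if c == '+': tape[ptr] += 1
        elif c == '-': tape[ptr] -= 1
        elif c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '[' and tape[ptr] == 0: pc = match[pc]
        elif c == ']' and tape[ptr] != 0: pc = match[pc]
        pc += 1
    return ptr

# Empty tape; every fragment starts on the stack's starting cell (index 0).
tape = [0] * 16
bf('[>>][-]+>[-]' + '+' * 5, tape)   # push 5: walk to the end 0, plant a 1, write
bf('[>>][-]+>[-]' + '+' * 9, tape)   # push 9
assert tape[:5] == [1, 5, 1, 9, 0]   # the [1][item][1][item][0] layout

p = bf('[>>]<<->', tape)             # pop: pointer ends on the top value
assert tape[p] == 9 and tape[p - 1] == 0   # 9 read; end marker moved down

bf('[>>][-]+>[-]' + '+' * 7, tape)   # push again: the [-]s clear the stale 9
assert tape[:5] == [1, 5, 1, 7, 0]
```

Without the [-]s, the third push would have landed on the leftover 9 and produced 16 instead of 7 — exactly the "popped values stay on the tape" point.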
18:03:25 (but easy still) 18:06:20 yes 18:06:44 i used it when making my brainfuck-brainfuck interpreter 18:06:55 i should finish that some day 18:07:00 i was so close :\ 18:07:12 the interpreter i was making it with was just goddamn crappy 18:07:26 infinite loop -> crash, negative value -> crash 18:11:01 it was about two years ago and i was a total noob, so i'm not actually sure it would even be that much of a challenge 18:11:11 anyway, i'll follow oerjan's footsteps -> 18:11:29 (or in them, if that's the way to say it in english) 18:12:52 hey wow i managed to design a non-esoteric functional language 18:12:53 that's a first 18:12:58 and it doesn't even look much like haskell! 18:14:31 :P 18:15:38 cool 18:15:50 (me too, numbda ;)) 18:16:15 numbda is esoteric :P 18:16:22 mine doesn't look as esoteric: http://pastie.caboo.se/80953 18:16:25 though numbda wasn't really designed, it's a result of me starting to code. 18:17:47 (Also, f(x, y) is not a shortcut for f(x)(y) right now, although it is always equivalent. Thinking about adding va-args later.) 18:18:13 (Currying va-arg functions once you have already supplied enough args will require explicit curry(f, list) i guess) 18:18:13 ehird`: i'd say that looks quite a lot like haskell 18:18:20 oklofok, SML is closer :) 18:18:25 SML and Haskell look eerily similar 18:18:31 but then again, haskell doesn't have a "look", really 18:18:33 (Hint: because haskell is inspired by SML) 18:18:48 oklofok, one major difference is how i always use f(x, y) instead of (f x y) 18:18:52 i like it more that way 18:19:10 in oklotalk, those two parse as the same thing, but for a different reason :) 18:19:17 :P 18:19:22 is that reason: 18:19:25 x,y -> x y 18:19:27 and (x) -> x 18:19:29 yes!
18:19:44 and f x = f.call(x) 18:19:48 so f(x, y) is f (x , y), which has x y, so it's f x y 18:20:27 (if f isn't a funcoken, that parses differently) 18:20:31 one of the advantages of my syntax is that there's no pesky left-associative-messing-around 18:20:45 (but since objoken and funcoken are my own terms, you don't know what they are) 18:20:56 also, you don't need to do e.g. (*) to get the * function (because f * x is f times x, not f (function *) x)) 18:21:03 you can just do f(*, x) 18:21:13 i think x * y binary operators will be `*`(x, y) 18:21:14 not sure 18:22:22 i love how everything like that just arises from the underlying structure of oklotalk 18:22:28 but i hate how i can't stop talking about it 18:22:33 why didn't i go eat? 18:22:43 really, i'm an irc-a-holic 18:22:51 can't live without irc-a-hole 18:23:09 (i prefer holes over hols.) 18:23:23 now, me goes -> 18:23:29 heh, i think my language has been heavily influenced by merd: http://merd.sourceforge.net/ 18:23:36 it's very similar! 18:23:47 except my language has no "if" 18:25:43 i had this idea for a language when speccing numbda 18:25:58 a language called yawn, for it's excessive laziness 18:26:12 but i just have ideas for it 18:26:58 (so basically i was just telling the name which is trying to be clever, you have fun with that...) 18:27:13 (i should filter what i say) 18:27:18 did i go? 18:27:22 --------> 18:31:36 Excessive laziness? 18:32:09 http://pastie.caboo.se/80960 i should write a spec for this, shouldn't I? 18:32:09 oklofok: do you have an oklotalk spec anywhere? 18:32:17 ihope, he only has a parsing spec. 18:32:52 ehird`: how do you curry that there? 
18:45:21 ihope, you just apply it to not enough arguments 18:45:25 note product -> fold(*, 0) ; 18:45:57 if you want to do va-args, when i implement va-args, then you'd have to do curry(vaFunc, [my, curried, args]) 18:46:10 (same with default arguments) 18:51:51 ihope: i was thinking there'd be two separate threads evaluating, one so lazy it evaluates nothing, and the other dependent on that 18:52:06 i have some ideas on how to make that work 18:52:12 Hmm... 18:52:14 but not enough to be interesting to tell 18:58:48 ihope, the idea for implementing my language is for it to be interpreted ONLY 18:58:51 well, most of the time 18:59:09 and to have a small C base, and as much as possible written in the language itself (no matter how strained the low-level code might look in it) 18:59:35 then, another version of the base, written in the language itself - so if a compiler is ever written, you can have a self-hosted interpreter 18:59:57 Where's a C spec? 19:02:53 ? 19:03:01 you mean a spec of the C language? 19:03:04 if so, you'll have to pay 19:05:04 You have to pay to look at specifications? 19:05:37 Okay then, where's a GCC C spec? :-P 19:12:25 Or maybe I should compile for GHC if there's no reason to go with C instead. 19:13:43 gcc c spec doesn't exist 19:13:47 you have to pay iso to get the spec 19:13:53 how do you think standards agencies make their money 19:13:59 it costs $80 for C89, iirc 19:14:09 you COULD pirate it.. 19:14:21 Why not just get the C Programming Language? 19:14:27 cause that's not a spec 19:14:36 Why do you need a spec? :P 19:14:42 because ihope is compiling to c 19:14:53 Err... so? 19:15:03 Does he not know C, or something? 19:15:19 Couldn't you rewrite a spec to get an equivalent spec not protected by anything? 19:15:23 You need a spec to reliably compile 19:15:30 ihope, Yes, but it's a pain in the butt so nobody will 19:15:42 You don't want your compiler to produce invalid code in obscure circumstances.
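The "apply to not enough arguments" currying and the explicit `curry(f, list)` escape hatch proposed above can be mimicked in Python. `curry` and `fold` here are hypothetical helpers mirroring the proposed syntax, not part of any real implementation:

```python
import functools
import operator

def curry(f, args):
    """Hypothetical curry(f, list) from the discussion: pre-supply some args."""
    return functools.partial(f, *args)

def fold(f, init, xs):
    """fold as used in `product -> fold(*, 0)` above."""
    return functools.reduce(f, xs, init)

# "You just apply to not enough arguments":
product = curry(fold, [operator.mul, 1])   # n.b. the identity for * is 1, not 0
assert product([2, 3, 4]) == 24

# Once a va-arg function has enough args to be callable, currying further
# needs the explicit helper, as proposed:
join = curry(lambda sep, *parts: sep.join(parts), [", "])
assert join("a", "b", "c") == "a, b, c"
```

(The `fold(*, 0)` in the log would make every product 0; the identity element for multiplication is 1.)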
19:17:29 http://pastie.caboo.se/80978 more examples! 19:17:31 i need to write a spec. 19:20:28 the comments on 99-bottles-of-beer are almost as stupid as on youtube. http://99-bottles-of-beer.net/language-java-1162.html "Alex Mirchev That language is definatly java.. btw, why is your code so weird... it doesnt look like a correct syntax..." 19:21:42 Also: http://web.mit.edu/kenta/www/two/beer_i_m.html "Java is a machine independent compiler based on C++ which targets to pseudo-code." "Java Script Interpretive Java." grrrrr 19:22:20 ROFL wut? 19:23:43 some people r dum lol 19:24:34 i didn't know there was a language called Microsoft Word xD 19:24:49 i know the language, however 19:24:51 that's weird. 19:37:17 What's the usual way of making a language "system-complete"? 19:38:11 ...as in being able to make all the operating system calls and such? 19:38:43 -!- atrapado has joined. 19:39:43 I guess I could reserve some identifier space for... I/O extentions. 19:40:32 write a primitive like syscall() in your target language, 19:40:35 wrap around it. 19:40:46 or, wrap around cstdlib or equiv. functions manually 19:40:53 Or do one of those. 19:44:13 -!- Sgeo has joined. 19:55:12 -!- RodgerTheGreat has joined. 19:55:27 hi guys 19:57:23 great, i was just looking for ya 19:57:28 *waiting 20:02:45 hi 20:08:24 Why am I getting a NoSuchMethodError? 20:08:40 When the thing is obviously compiling correctly, and the method exists. 20:13:21 *Chirp chirp* 20:13:44 omg chick in my soup 20:13:52 Sukoshi: i don't believe you. 20:14:30 ? 20:14:47 i think the compiler is more reliable than you 20:14:50 they tend to be quite 20:14:58 (about the method) 20:15:05 (and a bit about the chirp) 20:15:46 Well, my top Emacs buffer is viewing the method *right* now so :P 20:16:27 oh 20:16:34 then i guess you are both a bit crooked 20:18:58 ... Thanks for the help? 
:D 20:20:00 hey, no problem, that's what i'm for 20:22:18 Sukoshi: i can't really believe that can happen if you aren't doing something very very weird 20:23:02 Well, I've been purposefully avoiding generics because I'm not sure if GCJ supports them. 20:23:33 i see 20:23:34 So I've been doing a whole bunch of casts. 20:23:53 i like looking at code and i know some java, so if you isolate the problem, i'd love to look ;) 20:28:19 Sukoshi: GCJ? eep. Good luck debugging that thing's output. :S 20:29:51 RodgerTheGreat: I'm using Sun's JVM right now. 20:31:10 oh. hunh. 20:31:19 GregorR: How's D for writing an emulator? 20:31:22 I've never seen NoSuchMethodError. 20:31:34 GregorR: I need to use heavy pointer-foo and ASM, so. 20:31:38 RodgerTheGreat: ... :P 20:31:41 Well, shower time for me 20:31:44 sorry 20:31:48 . 20:47:11 Sukoshi: D certainly gives you heavy pointer-foo and ASM if you want it. 20:48:16 my brother got the harry potter book, he's gone all spastic 20:49:47 bsmntbombdood: I head that pretty much everyone dies 20:50:01 he just screamed 20:50:11 ? 20:50:29 my brother, he just screamed 20:50:49 yeah, that was more of a "why the fuck did he scream" question mark 20:51:11 GregorR: How's native D speed compared to C and C++ ? 20:51:18 Now I'm really going to shower, heh. 20:51:28 (Before this was shower preparation :P) 20:51:48 lol 20:52:14 Sukoshi: That sort of depends on how heavily you use the GC. You can choose to stop the GC and do manual deletion, in which case it's as fast. If you use the GC, it'll stop the world on occasion. That being said, the GC-stopping functions are in there for purposes exactly like emulators, so :P 21:04:04 what's the most noobish form of GC currently known? 21:04:05 i.e. simplest. 21:04:23 hm 21:04:39 a C programmer? :) 21:06:51 heh 21:09:38 reference counting 21:10:07 super fast and super easy, but misses cycles 21:10:08 Calling reference counting GC is an insult to GC :) 21:10:34 why? 
21:12:15 reference counting works perfectly in languages without mutators 21:12:16 sometimes you can build garbage collection into the compiler around some complicated scoping rules 21:12:33 set-car! etc 21:13:40 GregorR: Yeah, I want to stop the GC. 21:14:23 GregorR: Got any good tutorials on it? 21:14:28 For a C/Java/Lisp/ST er? 21:26:15 i mean non-referencecounting 21:26:27 ref counting is simple but ineffective for e.g. circular objects 21:40:16 I found an IBM model M! 21:40:33 some heathen was going to throw it away 21:40:41 tut tut 21:40:47 now I must find a USB adapter to plug this beauty into my mac 21:40:51 ow 21:40:53 model ms are nice 21:40:55 but not for macs! 21:41:06 that's like, harsh dissonance in hardware form, man! 21:41:09 USB adapters exist 21:41:10 :P 21:41:39 Model M + OSX: beautiful interface for your eyes, and beautiful interface for your hands. :D 21:54:00 Circular objects... 21:54:14 * ihope ponders 21:54:25 Yes, there's sort of failure there. 21:55:23 how come? 21:57:01 Well, if an object contains a pointer to itself, but nothing else contains a pointer to it, the reference counter is still 1. 21:57:23 ihope, Well duh 21:57:31 that's why ref counting is not usable 21:57:40 Python only uses it with hacks (circular detection) 21:57:55 ihope: then there is a pointer to it, let the poor object be, he obviously wants to live 21:57:59 It's sort of like determining whether an object is supported based on whether there's something directly under it. Put something under itself, and boom, support. 21:58:06 oklofok: :-P 21:58:43 bbl- food 21:58:44 ihope: are you implying i'm not strong enough to lift myself in the air? 21:59:16 oklofok: don't jump; you'll get garbage collected. 
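The self-supporting-object failure ihope describes can be shown with a toy reference counter. This is a sketch, not how any real runtime stores counts; as noted above, collectors like CPython's bolt a cycle detector on top of refcounts precisely because of this:

```python
class Obj:
    """Toy refcounted object; `refcount` starts at 1 for the creating reference."""
    def __init__(self):
        self.refcount = 1
        self.ref = None          # at most one outgoing pointer, for simplicity
        self.freed = False

    def point_to(self, other):
        other.refcount += 1      # a new pointer to `other` now exists
        self.ref = other

def release(obj):
    """Drop one reference; free (and cascade) when the count reaches zero."""
    obj.refcount -= 1
    if obj.refcount == 0:
        obj.freed = True
        if obj.ref is not None:
            release(obj.ref)

# Acyclic garbage is reclaimed fine:
b = Obj()
release(b)
assert b.freed

# But an object pointing to itself keeps itself alive forever:
a = Obj()
a.point_to(a)        # count is now 2: the external reference plus its own
release(a)           # the only *external* reference goes away...
assert a.refcount == 1 and not a.freed   # ...yet `a` is never collected
```

Put something under itself, and boom, support — the count never reaches zero even though nothing reachable points at it.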
21:59:47 who wants to help design an analog computer rube goldberg machine 22:00:04 ehird`: been my plan for years :) 22:00:13 oklofok, then help design its fruitition :) 22:00:23 i just somehow feel that can't be made over irc :) 22:00:28 design, sure 22:00:31 it can be designed over the internet 22:00:39 boolfuck! 22:00:40 plus final plans can be made and a guide to make your own 22:02:26 i'm kinda sleepy 22:02:33 :P 22:02:33 wonder if i should sleep 22:02:40 noooo! sleep is useless! 22:02:41 :P 22:02:48 true, it's the cousin of death 22:06:09 i do know for sure i should either do something or sleep 22:06:11 not idle here. 22:06:21 staring at the screen... 22:06:28 hmm, gonna go buy something 22:06:30 -------> 22:10:00 Model M == love. 22:10:17 hmm, how useful would a computer with a tape of 6 two-state cells be? 22:10:21 i imagine not useful for actual computation 22:10:28 RodgerTheGreat: You need an expensive one, by the way. 22:10:41 (assuming a programming language something like a highly simplified boolfuck) 22:10:50 GregorR: So? Tutorial? 22:11:20 ehird`: If you can prove that it's Turing Complete, then you can do anything in it ;) 22:12:37 well obviously 6 two-state cells isn't TC 22:12:46 but is it enough to perform some simple calculations? 22:12:48 well, i guess not 22:12:53 bitxtreme is tc 22:12:54 since, you can't store many numbers for one 22:12:58 no it isn't 22:13:00 why wouldn't that be 22:13:02 oh yeah it is 22:13:07 didn't ya read it's homepage!? 22:13:16 you're joking right 22:13:21 because that TC claim is a joke by the author 22:13:24 err, yes 22:13:30 ok good :P 22:13:36 i'm considering 6 0-9 cells 22:13:40 that'd be a bit more useful 22:13:46 errr..... 
don't think so :| 22:13:50 bits are nicer 22:13:51 just a little bit :P 22:13:57 maybe i could squeeze up to MAX 20 0-9 cells 22:13:59 hihi bit 22:14:09 that should be useful for, i dunno, adding two small numbers together 22:14:19 well, you want the memory to be easily extendable 22:14:29 so you can make it tc when you get an infinite universe 22:14:45 ehird`: still easier to do base-2. 22:14:51 i mean, subtraction 22:14:56 and addition 22:15:00 (same thing) 22:15:26 yeah but 20 0-9s offer more computing potential than 20 0-1s 22:15:43 but you can make 100 0 22:15:44 ... 22:15:55 but you can make 100 0-1's easier than 20 0-9's 22:16:12 and you can actually make them calculate stuff without doing something very incredibly hard 22:16:40 well, 100 0-1's will be hard this IS a rube goldberg machine 22:17:00 heh 22:17:03 i mean, i have to incorporate tennis balls as a main part - making 100 binary registers will not exactly be easy/fun 22:17:11 :P 22:17:21 you think a 0-9 is even possible, then? 22:17:26 i do not. 22:17:31 they made a difference engine in lego.. 22:17:54 does that use 10 base for other than output? 22:17:57 i doubt it 22:18:04 but i didn't understand the pic, so... 22:18:24 anyway, wtf am i still doing here? -----> 22:18:25 well my registers will be primarily output i guess 22:18:34 err 22:18:35 yes 22:18:39 maybe, 10 output, 10 data 22:18:40 but... 2-base 22:18:42 think about it 22:18:46 ;) 22:18:47 ----> 22:19:16 10 base-2 data cells give me only 1024 combinations of state... 22:19:34 10 base-10 data cells give me 10000000000. 22:20:07 but a 10-base one cannot be used for computation, too complicated 22:20:14 now ----- 22:20:16 ---> 22:20:20 (for real this time) 22:20:29 it can be used for computation, albeit not too simply 22:20:55 though base-2 is easier, as i just need a switch 22:21:14 20 switches = 1048576 states, which is good 22:21:44 then 10 base-10 output displays. 22:23:44 -!- atrapado has quit ("zadflksdg sh fhjd").
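The state counts quoted above check out; as a quick sanity check (with 2^100 added to show why the "easier to build" 100 binary cells win on raw state space anyway):

```python
# 10 two-state cells vs 10 ten-state cells, as computed in the discussion:
assert 2 ** 10 == 1024
assert 10 ** 10 == 10_000_000_000

# 20 switches, the proposed compromise:
assert 2 ** 20 == 1_048_576

# 100 binary cells (oklofok's "easier than 20 0-9's") dwarf both:
assert 2 ** 100 > 10 ** 20
```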
22:27:38 The structs in D are so ... easy. 22:27:41 It's like ... cheating. 22:29:40 how are they not easy in C? 22:30:28 Well, there's stuff you get used to like wrapping stuff in structs for type checking, or doing union/struct combos and such. 22:30:42 And this new named-struct assignment thinger is waaay cheap. 22:30:53 Whatever happened to programmer skill? :| 22:36:57 Sukoshi: invent a worse language and use that one? 22:37:24 lol 22:39:38 D is fun, but sometimes lame 22:40:02 GregorR: I'm concerned about all the stuff D takes care of for you. 22:40:09 How's the performance hit from that? 22:40:20 -!- sebbu has quit ("@+"). 22:40:31 Well, everything it "takes care of for you" you have to ask for except for GC. 22:40:31 Sukoshi: i don't think anything else than gc really affects anything 22:40:48 Wow. Really? 22:40:54 I... don't ... believe you. 22:40:58 :P 22:40:58 Things like dynamic array concatenation et cetera involve a malloc, but you pretty much have to ask for it. 22:41:03 Sukoshi, well, the runtime type system 22:41:05 but.. 22:41:08 Yeah, there we are GregorR. 22:41:18 Most of emulator stuff won't even deal with string concatenation and all. 22:41:28 It's just that, OOP is a godsend with that sorta stuff. 22:41:28 Doing type-checking is a fairly quick lookup into the vptr, I've never seen /anyone/ complain about the speed there. 22:41:34 Plus, you can just compile with -release to get rid of that. 22:41:57 [That is, once you're sure that you're not doing anything stupid in runtime type checking, just use -release and it all assumes it's OK] 22:42:05 it's constant time usually, that's like a negative number of clock cycles 22:42:18 What's this delegate stuff? 22:43:11 So will -release compile out the dynamic array stuff? 22:43:24 Or is there a little marker you can give static arrays? 22:44:07 And lastly, how do you interface with ASM code? 22:44:58 -!- oerjan has joined.
22:45:16 suifur: Uh, the dynamic array stuff can't be compiled out .. 22:45:17 Erm 22:45:20 Sukoshi: [above] 22:45:26 Sukoshi: But it will compile out the bounds-checking of it. 22:45:47 Sukoshi: As per interfacing with ASM, see http://www.digitalmars.com/d/1.0/iasm.html 22:46:01 When was the last time suifur even talked? :D 22:46:13 Damn you tab-completion! :P 22:48:08 use a better client 22:48:27 one that uses last-talked order for tab completion 22:49:15 Yeah. 22:49:53 a generic tab completion would be nice 22:50:05 last word used beginning with what you typed 22:50:18 l+tab=last 22:50:36 (not useful, nice) 22:51:53 -!- pikhq has joined. 22:52:57 how about just 22:52:59 =last 22:53:04 e.g. 22:53:07 hello, ! 22:53:10 or yes 22:53:15 two button irc client 22:53:21 great 22:53:30 oh 22:53:36 = last speaker 22:53:36 some text editors and word proccessors have tab completion of all words in their spellcheck database or previously typed 22:54:03 cool 22:54:10 good for them 22:54:17 wish i could do that 23:00:44 Grrr. NoSuchMethodError!!! 23:08:27 this still java? 23:08:50 i thought java methods were looked up at compile time 23:10:55 Sukoshi: sprinkle your code with assertions. 23:11:34 bsmntbombdood, the compiler uses exceptions as errors 23:12:16 Sukoshi: i thought you were java-fu 23:12:40 java boy can't even code java! 23:14:13 well, java folks CAN'T code java 23:14:15 nobody can 23:14:18 really. :) 23:17:15 bsmntbombdood: Didn't I say that I'm a C coder? 23:17:19 * ihope CTCP TIMEs himself because he doesn't feel like double-clicking the clock 23:17:20 Primarily. 23:17:31 (I mean, when it comes to static languages.) 23:17:44 Well, I've found out the error ... and it's ... weird. 23:19:08 ihope, did it for you. 23:19:12 now you can be even more lazy :) 23:19:24 :-) 23:19:53 Though my client tosses CTCP requests. 
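The completion scheme sketched in the conversation above — complete to the last word used that begins with what you typed, so nicks come out in last-talked order — is easy to prototype. `Completer` here is a hypothetical helper for illustration, not any real IRC client's API:

```python
class Completer:
    """Complete a prefix to the most recently seen matching word."""
    def __init__(self):
        self.recent = []            # most recently used words, newest first

    def saw(self, text):
        """Record a line of channel traffic, word by word."""
        for word in text.split():
            if word in self.recent:
                self.recent.remove(word)
            self.recent.insert(0, word)   # bump to the front

    def complete(self, prefix):
        """l + tab = last: newest matching word wins."""
        for word in self.recent:          # scanned newest-first
            if word.startswith(prefix):
                return word
        return prefix                     # no match: leave the input alone

c = Completer()
c.saw("some text editors have tab completion")
c.saw("the last word wins")
assert c.complete("t") == "the"     # "the" was seen more recently than "tab"/"text"
assert c.complete("comp") == "completion"
assert c.complete("l") == "last"
```

Tab-completing nicks in last-talked order is the same structure with nicks instead of words.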
23:20:05 data LCTerm = Var Label | Apply LCTerm LCTerm | Lambda Label LCTerm; data SKITerm = Apply SKITerm SKITerm | S | K | I 23:20:14 (Never mind the fact that I used "Apply" twice.) 23:20:40 Now, continuations would probably help in compiling from LCTerm to SKITerm, though I'm not sure just how. 23:21:53 cannot imagine why. 23:21:59 hmm 23:22:06 there's a binary clock but no hexadecimal clock 23:22:08 somebody fix that 23:22:18 Maybe delimited continuations. 23:22:20 compile (Apply t1 t2) = do t1' <- compile t1; t2' <- compile t2; return (Apply t1' t2 23:22:34 ...gah, left off the last two characters? 23:22:37 compile (Apply t1 t2) = do t1' <- compile t1; t2' <- compile t2; return (Apply t1' t2') 23:22:57 compile (Lambda l t1) = do t1' <- compile t1; return (Apply K t1') 23:22:58 i think you are just reinventing haskell 23:23:01 actually, you want an intermediate format that includes Vars. 23:23:13 +SKI 23:23:26 ehird`: writing something in Haskell is reinventing Haskell? 23:23:32 ...oh 23:23:40 i thought you were still going on about your language :P 23:24:18 abstraction elimination is just simple recursion if you have vars on both sides. 23:24:42 I may be able to come up with a clever way of doing this. 23:25:58 compile (Var l) should somehow look for the corresponding compile (Lambda l t1) and... do something with it. 23:26:19 i'd like to do D but i can't install the compiler 23:26:26 these computers are so hard to use :\ 23:26:32 you _don't_ want to consider more than one variable at one time. Trust me. 23:27:10 Can you prove there's no really clever way of doing this? :-P 23:27:19 oklobot: Wanna help with an NES emulator? 23:27:31 of course not. But having a common data structure makes it so much simpler. 23:29:04 Sukoshi: you mean oerjan or me? 23:29:13 i wanna help, oerjan can help.
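The LCTerm-to-SKITerm compile being worked out above is standard bracket abstraction, "simple recursion if you have vars on both sides". A Python sketch mirroring the Haskell data declarations (tuples for Apply/Lambda, plain strings for variables and combinators, so variables must not be named 'S', 'K' or 'I' here). It includes the `Apply K t1'` rule from the chat and is the naive scheme whose output blows up exponentially with nested lambdas, as oerjan warns:

```python
def free_in(x, t):
    """Does variable x occur free in term t? Terms are 'S'/'K'/'I',
    variable names (other strings), ('app', f, g) or ('lam', v, body)."""
    if isinstance(t, str):
        return t == x
    if t[0] == 'app':
        return free_in(x, t[1]) or free_in(x, t[2])
    return t[1] != x and free_in(x, t[2])         # ('lam', v, body)

def compile_ski(t):
    """Naive abstraction elimination, one variable at a time."""
    if isinstance(t, str):
        return t
    if t[0] == 'app':
        return ('app', compile_ski(t[1]), compile_ski(t[2]))
    return abstract(t[1], compile_ski(t[2]))      # eliminate the binder

def abstract(x, t):
    if not free_in(x, t):
        return ('app', 'K', t)                    # the `Apply K t1'` rule above
    if t == x:
        return 'I'
    # t must be an application: distribute with S over both sides
    return ('app', ('app', 'S', abstract(x, t[1])), abstract(x, t[2]))

def run(t):
    """Evaluate a closed SKI term as Python closures (for testing only)."""
    prim = {'S': lambda f: lambda g: lambda x: f(x)(g(x)),
            'K': lambda a: lambda b: a,
            'I': lambda a: a}
    return prim[t] if isinstance(t, str) else run(t[1])(run(t[2]))

assert compile_ski(('lam', 'x', 'x')) == 'I'
const = run(compile_ski(('lam', 'x', ('lam', 'y', 'x'))))
assert const(1)(2) == 1
apply_to = run(compile_ski(('lam', 'x', ('lam', 'y', ('app', 'y', 'x')))))
assert apply_to(3)(lambda n: n + 1) == 4
```

The S rule duplicates the abstracted variable into both halves of every application, which is where the exponential growth under nested lambdas comes from; the "clever" non-exponential algorithms discussed below pass an environment of variables instead.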
23:29:30 i haven't done D but it looks awesome 23:29:34 among other things, you want to give the result of translating a sublambda _back_ into the simplification of the outer ones 23:29:44 Hmm. Somehow, my mind read that as oerjan: you mean ihope or me? 23:29:45 someone install me the compiler and tell me how to use it :) 23:30:17 That makes sense as long as Sukoshi said "oerjan: Wanna help with an NES emulator?" 23:30:38 Well, I don't have much to lose by trying to come up with a clever way of doing this. 23:30:41 err... you sure it would make sense then? 23:30:50 ah 23:30:54 it would 23:30:57 ssh 23:31:00 which means that needs to be in the intersection of the before and after formats 23:32:03 now if you want to be _clever_, come up with an algorithm which doesn't grow exponentially as you nest lambdas. 23:35:07 Hmm, I think cleverness is coming vaguely... 23:35:10 is that possible? 23:35:29 yes, although the initial overhead is greater. 23:36:09 you can pass a list of variables to look up in 23:36:38 it resembles deBruijn notation... 23:37:17 Hmm, contexts... 23:38:06 i am sure you could even do binary lookup somehow. 23:38:29 * pikhq is home. . . :D 23:38:37 (logarithmic growth but horrible overhead, i guess) 23:38:48 you can always do the naive algorithm but then reduce afterwards 23:39:32 might be easier to choose while you still have lambdas to analyze 23:39:47 Hmm, a monadic hole... 23:40:00 Yes, you, oklofok. 23:40:21 I've found a Microsoft way to fix this error. 23:40:47 ...a monadic version of LCTerm that can have holes in it? 23:41:07 /* For some reason, the Hashtable contains an extra null element that is useless. When returning number of entries, decrease Hashtable entries by 1 */ 23:41:36 ;P 23:41:51 Hey, it works. 23:41:54 ihope: zippers! 23:42:07 clothepins? 23:42:11 Zippers are what I'm reminded of, yes... 23:42:37 although zippers with several holes are far more complicated 23:42:41 Sukoshi: Call it a null-terminated Hashtable. 
:p 23:42:50 ;D 23:42:56 But so far, I don't think this actually has anything to do with zippers. 23:43:12 i think Oleg (TM) has done a tiny bit on it. 23:43:14 But because I want to deliver this code, I think I will do exactly that and do some more heuristics later. 23:43:43 Sukoshi: i do want to help. 23:43:54 oklobot: Yay. 23:44:00 oklobot: How much ASM do you know? 23:44:00 NES emulator? that gamie thing 23:44:04 nintendo 23:44:11 Cool, we're butchering trademarks... 23:44:12 i know a lot of theory. 23:44:26 i haven't written a line of assembly since i never got a compiler set up :) 23:44:31 and vincenz in #haskell was doing something the other day 23:44:38 If I wanted theory, I'd use Haskell, not ASM :D 23:44:48 i know a lot of theory about asm 23:44:56 Why are you doing stuff in ASM? 23:44:58 Phaw. Be an engineer. Just Do It. 23:45:02 hehe 23:45:12 pikhq: Because this is practice for a GBA emulator I plan to fork from VBA. 23:45:14 pikhq: asm is love 23:45:17 err 23:45:19 and that. 23:45:19 Because the Linux VBA is bleh. 23:45:31 Ah. 23:45:37 Good reason. 23:45:48 * pikhq is an ASM amateur 23:46:03 If you have a brain, and can imagine stacks and registers... it shouldn't be too hard. 23:46:17 i've read a few books about asm, and an inter processor manual or something half-way through 23:46:28 Grr. Do more practice :P 23:46:34 hehe :) 23:46:47 really, i just didn't get tasm and masm to work 23:47:12 installing programs is reeeeal hard 23:47:16 (i'll retry now) 23:47:39 i have nasm and masm on my hd 23:47:41 it seems 23:47:56 Figs i think did some assembly... or who was it 23:48:04 perhaps him 23:48:15 Uhh... 23:48:18 NASM we use. 
23:49:12 hmm 23:49:13 actually 23:49:15 What makes me happy is that what I'm trying to do would probably be entirely non-obvious without monads :-) 23:49:25 i recall making a program play random sounds with the pc beeper 23:49:29 but i didn't know asm then 23:49:31 i was like 12 23:49:39 (with asm that is) 23:50:02 that's all i ever made with it 23:50:13 Then grab a good tutorial around, and play with it. 23:50:33 good idea 23:50:35 i'll do that now 23:52:09 uh you gotta love assembly 23:52:25 grab a tutorial, try the hello world program, get 7 errors <3 23:54:06 ihope: with monads, you can make it entirely incomprehensible! :D 23:54:12 Sukoshi: isn't making a NES emulator rather huge a challenge? 23:54:26 oerjan: :-P 23:54:34 though i agree those are the best ones 23:55:07 Indeed, Haskell is probably capable of writing extremely short stuff that doesn't make any sense at all until you've thought it over a few days. 23:56:10 i love it how i can just skip @ anywhere in a tutorial, see immediately what's happening and rememeber reading about how that's done (the basic bit and jmp fun i mean), but i have absolutely no idea how to make a "Hello world" program 23:56:26 *rememeber 23:56:28 *rememeber 23:56:30 ... 23:56:37 rememeber, yes. 23:58:34 what you say three times is true