# Talk:Expload

## Hmmmm

This didn't turn out as well as I'd hoped. It has its roots in an older idea of mine (provisionally called GI) to use undirected, unlabelled graphs as the instructions, and then to compare the undirected graphs in the programs to the ones defined as instructions. That would be in the complexity class **GI**, which is not known to equal either **P** or **NP**. Here, I wanted to have something that was definitely outside **P**, and there is of course the possibility that tomorrow, it will be shown that **GI**, or maybe even **NP**, is just **P** anyway. But GI is also computationally unavoidable in a way Expload isn't. (In Expload, you can always make turmoids which are trivial and which can be resolved easily.) But implementing the instructions to assemble undirected, unlabelled graphs would've been more work. (I encourage anyone who wants to try designing GI to do so, because it could also be fun. But if you do, please give it a better name than GI.)

But there's maybe a more difficult issue in that the language doesn't force you to write "inherently slow code", and if it did, it would probably have to be in some contrived way (you cannot use the same turmoid more than 5 times? the *b* in every turmoid must be at least twice as large as the *b* in the previous?)... and to say that it can't guarantee that instructions are executed efficiently, well, that's true for any language: just write your program in the manner of: write a Tag system interpreter, then feed it symbols which encode an NTM which reduces lambda calculus forms which you feed it which... etc. Chris Pressey (talk) 01:04, 6 December 2012 (UTC)

- This isn't a particularly interesting design space (though it *is* quite stinky) because it's obvious that while one can force the programmer to write more and more annoying and obtuse variations on testing if a Turing machine halts after *n* steps, it'd be very, very difficult to force them to apply these tests to arbitrarily complex TMs without foisting essentially random TMs on them, say by enumerating TMs and saying "ok, here's the next one." Programming in *that* would be, well, quite difficult. Still, I have hopes that the House of Expload will one day produce a scion who will go far in life... Chris Pressey (talk) 14:19, 7 December 2012 (UTC)

- Reading over this again years later, a few things occur to me.
  - One is that the article could use some explanation of why simulating a Turing machine for *k* steps is EXPTIME-complete. Because offhand, it looks like, don't you just have to run the TM for *k* steps? Surely that's a polynomial-time simulation? Well, yes it is, but if *k* is written in binary, it's only got log *k* bits, and the simulation is exponential in the number of those bits. As the Wikipedia article on EXPTIME says, the problem is P-complete if *k* is written in unary. (Which strikes me as just weird, that choosing binary notation over unary notation "shifts down" an entire complexity class, but whatever. It's hardly the weirdest thing in complexity theory, I guess.)
  - The other is, it's not entirely clear what problem Son of Expload and Old Army Buddy of Expload were trying to solve (and I don't remember), but if the goal is to make an "inherently slow language", it might work to add a constraint like: if a turmoid reduces in *s* steps, where *s* is equal to or fewer than half the number of steps it previously took to reduce any turmoid, the Expload program simply halts. In other words, turmoids must take longer and longer to reduce. And this seems simpler than either of those variants. But is it inherently slow? That's always been the hard part. GI was intended to be, and Expload (as mentioned above) didn't achieve that. To show that it's inherently slow, you have to show there is no general way to take such a program and transform it, in less than exponential time, in such a way that you get a result which is always the same as running it as an Expload program would produce. Because that'd effectively be running it as an Expload program. --Chris Pressey (talk) 11:38, 14 June 2018 (UTC)
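To make the binary-vs-unary point concrete, here is a minimal sketch (not an actual Expload or TM interpreter; the one-cell "machine" is a stand-in of my own devising) showing that a step bound *k* given as a binary string of *n* bits demands up to 2^*n* simulation steps, even though the input itself stays small:

```python
def steps_needed(k_binary: str) -> int:
    """Decode a binary-encoded step bound: n input bits can encode k up to 2**n - 1."""
    return int(k_binary, 2)

def simulate(k_binary: str) -> int:
    """Faithfully perform k individual steps of a trivial stand-in machine.

    The work done is linear in k, hence exponential in len(k_binary),
    which is why the bounded-halting problem with a binary bound sits in
    EXPTIME rather than P.
    """
    k = steps_needed(k_binary)
    tape_cell = 0
    for _ in range(k):    # k iterations: exponential in the size of the input string
        tape_cell ^= 1    # stand-in for one machine transition
    return k

# 4 input bits already imply 10 steps; 8 bits imply 255; n bits imply up to 2**n - 1.
four_bits = simulate("1010")
eight_bits = simulate("11111111")
```

If *k* were written in unary instead, the input would be *k* symbols long and the same loop would be polynomial (indeed linear) in the input size, matching the Wikipedia remark about P-completeness.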
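The proposed "turmoids must become longer and longer" constraint can be sketched as a wrapper around an interpreter. This is only a hedged illustration: turmoid reduction itself is not modeled (the step counts are assumed to come from a real Expload interpreter), the function name is made up, and "half the steps of any previous turmoid" is read in its easiest-to-violate sense, i.e. half the *largest* previous count:

```python
def run_with_growth_constraint(reduce_steps):
    """Process per-turmoid reduction step counts under the growth constraint.

    reduce_steps: iterable of step counts, one per turmoid reduction,
    in the order the (hypothetical) interpreter produces them.
    Returns the counts processed before the constraint halts the program.
    """
    processed = []
    max_prev = 0  # largest step count seen so far
    for s in reduce_steps:
        # s <= p/2 for some previous count p iff 2*s <= max_prev
        if processed and 2 * s <= max_prev:
            break  # this turmoid reduced too quickly: the program halts
        processed.append(s)
        max_prev = max(max_prev, s)
    return processed

# A run whose fourth turmoid reduces in 9 steps is cut off, because
# 9 is no more than half of the 20 steps an earlier turmoid took.
trace = run_with_growth_constraint([10, 12, 20, 9])
```

Whether halting on too-quick reductions actually makes the language inherently slow is exactly the open question above: the constraint restricts runs, but says nothing about whether an equivalent result could be computed some faster way.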