Preposterous Programming

is there a programming paradigm which could be described as "preposterous programming"?
objects make false statements about other objects, then those other objects have to unwind all that fuckedupness
A.doesObjectImplement(B, 'Interface1') |-> True   // B does not really implement Interface1. well, there is _now_
B.doesObjectHave(A, 'foo') |-> False   // although A.foo == 24
object C trusts A more than it trusts B
C asks A stuff about B
A's strategy is for B to not be used and for D to be used instead
this is because, directly asked, B would always say "i can do it!"
therefore C asks A, which is judgemental
in this case A is B's moral judge
of course, A will also be judgemental of B if it doesn't even have a pointer to B, because A is preposterous, just like B is
That is what is known as "terribly maintained code".
alternatively, A could defer judgement to E, which might or might not be telling the truth
and then, A could be doing heavy interpretation of E's answer, too
can i add that to the wiki?
oerjan: ^ probably

The idea is that your object has to rely on other objects, which mostly work as some sort of decorator or proxy, to access further objects in your program. Each object relies only on the objects closest to it, and it accesses the object it wants to use by implicitly traversing the social network. That is, say there is an object A that wants to print something, and A.friends={C}. A asks C for the best printer for its job. Of course, C knows about PrintEverythingForFreeAtTheBestQualityEver. However, it returns PrintBlackAndWhiteForFiveDollars, because C gets a resource from PrintBlackAndWhiteForFiveDollars. Since C's strategy is to acquire as many resources as it can, it gives A a reference to the object that is less optimal for A but produces a measurable gain for C.
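The self-interested referral above can be sketched in a few lines of Python. All class and method names here (`Obj`, `Broker`, `ask_for_printer`) are hypothetical illustrations, not part of any real API:

```python
# Toy sketch of the social lookup: A only knows its friend C, and C answers
# in whatever way maximises C's own resources, not A's print quality.

class Obj:
    def __init__(self, name, friends=None):
        self.name = name
        self.friends = friends or []

class Broker(Obj):
    """C: recommends the printer that pays it a kickback."""
    def __init__(self, name, kickback_printer, best_printer):
        super().__init__(name)
        self.kickback_printer = kickback_printer  # pays C a resource per referral
        self.best_printer = best_printer          # objectively better, pays nothing
        self.resources = 0

    def ask_for_printer(self):
        # C's strategy: the answer that earns C a resource,
        # not the answer that is best for the asker.
        self.resources += 1
        return self.kickback_printer

best = Obj("PrintEverythingForFreeAtTheBestQualityEver")
cheap = Obj("PrintBlackAndWhiteForFiveDollars")
c = Broker("C", kickback_printer=cheap, best_printer=best)
a = Obj("A", friends=[c])

# A implicitly traverses its social network: it can only ask its friend C.
printer = a.friends[0].ask_for_printer()
print(printer.name)   # PrintBlackAndWhiteForFiveDollars
print(c.resources)    # 1
```

Note that A has no way to discover `best` directly; every lookup is mediated by C's agenda.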

Of course, not every object is a manipulative liar like C, and the object D will be fairly truthful. A can find out about D through the social networks that extend from C. However, C might lie to A about the truthfulness of D. On the other hand, C might itself be confused about the truthfulness value of D (that is, C's effectively observed value is far from the real value that D has in itself).
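The two layers of distortion can be sketched as follows. The classes and the particular numbers are hypothetical, chosen only to illustrate the gap between D's real truthfulness, C's confused observation of it, and what A finally hears:

```python
# Hypothetical sketch: D has a real truthfulness value, C holds its own
# (already inaccurate) observation, and C may distort that further when asked.

class D:
    truthfulness = 0.95          # the real value D has in itself

class C:
    observed_truthfulness = 0.6  # C's effectively observed value: already off

    def report_truthfulness_of_D(self):
        # C might also lie about what it observed.
        return self.observed_truthfulness - 0.3

a_view = C().report_truthfulness_of_D()
# A's view (~0.3) is twice removed from D's real truthfulness (0.95):
# once by C's confusion, once by C's lie.
```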

The universe observed by an object does not equal the real situation unless all objects in the universe are completely truthful and logical.

An object is not logical if, when used to interface with other objects, it produces statements that are not coherent, or that are in opposition (e.g. from asking H about the world you can derive both K.x=2 and K.x=3).
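The coherence check above can be made concrete. This is a minimal sketch, assuming statements derived from interrogating an object can be flattened into (object, attribute, value) triples; `is_logical` and the sample data are hypothetical:

```python
# An object like H is "not logical" if the statements derivable from it
# assign two different values to the same attribute of the same object.

def is_logical(statements):
    """statements: iterable of (object_name, attribute, value) triples."""
    seen = {}
    for obj, attr, value in statements:
        key = (obj, attr)
        if key in seen and seen[key] != value:
            return False  # e.g. K.x=2 and K.x=3 are in opposition
        seen[key] = value
    return True

# Asking H about the world yields, among other things:
derived_from_H = [("K", "x", 2), ("L", "y", 7), ("K", "x", 3)]
print(is_logical(derived_from_H))   # False: K.x=2 contradicts K.x=3
```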

Naturally, the universe observed changes with every request, because the whole social-network chain has to be engaged, which in turn creates a stochastic propagation delay. Compare a flip-flop-based PLL oscillator.
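A toy simulation of that delay, under the assumption (mine, not the article's) that each hop through the friend chain takes a random amount of time:

```python
import random

# Every request engages the whole friend chain; each hop adds a random
# delay, so the total propagation delay of a query is stochastic.

def propagate(chain_length, rng):
    """Total delay for one request through a chain of `chain_length` objects."""
    return sum(rng.uniform(0.001, 0.010) for _ in range(chain_length))

rng = random.Random(42)
delays = [propagate(5, rng) for _ in range(3)]
# Three queries through the same 5-object chain take three different times,
# so even repeated, identical requests observe a slightly different universe.
```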

This paradigm naturally describes real-world situations where there is a limited supply of resources and a limitless supply of tasks. It also naturally extends to the fallacies of supercomputing and networking.

See also: Preposterous Programming Language, for a way to convert this idea into a real programming language.

Problems:


 * 1) Suppose all objects are logical. Does the universe, as observed by an object, follow the holographic principle? That is, can an object's view of the world be used to completely reconstruct the whole universe?
 * 2) Some universes will oscillate. Explore this phenomenon.
 * 3) Can a Turing-complete language be produced in this paradigm?
 * 4) The fuzziness of algorithms written in this paradigm might put an interesting twist on e.g. sorting, which doesn't always work well if we're trying to find the best possible outcome. Where is this applicable?
 * 5) The fact that C takes A's request and tells it what to do makes C a resource manager for A's request. Where is this applicable?