ko'a ru'a satci bangu .i ko'e ru'a traci bangu .i ro da poi jufra bau ko'a
zo'u go da selsmuni gi le xe fanva be da bei ko'e be'o selsmuni i ku'i ma bangu ka satci i ma poi xe fanva da ko'e cu se cipra fi leka selsmuni.

Enough! (Both to show my lojbanic incompetence and to get started on the point.) xod's proposal looks like only the Logically Perfect Language part of the usual testability criterion, and so may avoid the problem of self-application (though maybe not).

So we have a sentence in English, say, that we suspect of being meaningless. We translate it into LPL (NOT Lojban, I hope) and check. But how do we translate it into LPL? Maybe an algorithm that always gives a unique and accurate translation? We know that is impossible for moving from a natural language to LPL, since a natural-language sentence is almost always ambiguous and fuzzy. Okay, then maybe an algorithm that, applied to a given situation, gives a unique and accurate translation of the sentence as applied to that situation? Well, maybe indeed, and we can guarantee uniqueness then. But what about accuracy?

Well, a translation is accurate, we may suppose without too much caviling, just in case it means the same thing as the original. But what if the original has no meaning? Will it be possible to have meaningless sentences in LPL and, if so, how is it perfect? Typically, this question gives rise to one or the other of two answers. One: there is a single meaningless expression in LPL into which all meaningless sentences translate and which, of course, plays no further role in the language. Or: meaningless sentences don't translate into LPL at all, and so the right side of the equation is false for containing a nondenoting expression {le xe fanva be da bei ko'e be'o} (or we can fiddle the equation in some insignificant way, using bound variables rather than descriptions). But all this assumes, to be effective, that we have agreed that the algorithm is correct (and it gets worse, of course, without the algorithm, relying on mere skill in translating).
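The criterion in the Lojban above can be put in standard notation, reading it as I intend it; the meaningfulness predicate $M$ and the translation function $f$ (from the one language into the other) are my labels, not anything in the original:

```latex
\forall s\,\bigl(\mathrm{Sentence}_{\text{ko'a}}(s)\;\rightarrow\;(M(s)\leftrightarrow M(f(s)))\bigr)
```

Note that the whole argument that follows is about whether $f$ can be specified (an algorithm) and whether its accuracy can be checked without already knowing what $M$ delivers.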
And, as soon as one's favorite claim is shown to be meaningless on this test, one withdraws one's agreement, pointing to the result just mentioned as evidence (adequate evidence, yet) that the algorithm is not correct, since it gives an inaccurate result here. So we try another algorithm that works in this case, and the cycle repeats itself.

But we might have objective tests for accuracy, at least the extensional one: that the original and the translation are true in exactly the same cases. Way too weak, since it allows all true sentences of mathematics, say, to have the same translation, and similarly all the "laws of nature." But more importantly, in a practical sense, the person whose sentence was declared meaningless presumably thinks that it is sometimes true, and so will know that no translation which makes it meaningless (and thus never true) can be right. We could move on to saying that accuracy means being the same in every possible situation, but -- quite aside from xod's determination not to allow such things (as I understand it) -- this notion has little explanatory power (ya gets out what ya puts in), so the problem of rejecting accuracy remains unresolved. And that doesn't even touch the issue of an exact (or LP) language.
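The extensional test of accuracy just mentioned can be put the same way; the circumstance variable $c$ and the truth predicate $T$ are again my labels:

```latex
\mathrm{Accurate}(s, f(s)) \;\leftrightarrow\; \forall c\,\bigl(T(s,c)\leftrightarrow T(f(s),c)\bigr)
```

The weakness is visible in the formula: any two sentences true in all the same cases come out mutually "accurate" translations, so all mathematical truths (true in every case) are interchangeable under this test.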