RE: [lojban] LALR1 question
Jay:
> On Mon, 27 Aug 2001, Invent Yourself wrote:
[...]
> > How hard would it be to create an LALR(5) Lojban, and how different would
> > it be to speak?
[...]
> The answer to your second question is significantly more difficult.
>
> In fact, I don't suggest you read it. I'm answering mostly so as to ponder
> the question out loud for myself. I suggest asking a psycholinguist, or at
> least a linguist.
Better a psycholinguist, because I'm a nonpsycho linguist & am not a
fount of knowledge. But let me muster what little I know. (This meagre
knowledge is 15 years old.)
Human parsing is done, of course, in real time and with minimal lookahead.
The parser can backtrack, but normally doesn't (the garden-path effect
is a forced backtrack). Processing of all levels is done in parallel,
with phonology slightly ahead, so phonology activates words in the lexicon
whose sound matches the phonological string, but the lexical word is
identified on the basis of the syntax and meaning of the sentence up
to that point. Syntactic and semantic structure are built up simultaneously
and pragmatic interpretation also begins at the start of the sentence
and proceeds in step with the parse. Local grammatical ambiguities are
resolved by taking into account pragmatics (guessing what it was the speaker
intended to say, and also guessing what the speaker is about to say next),
and by default preferences for building syntactically simpler rather than
more complex structures (complexity here being not an abstract notion of
complexity but instead something psychologically concrete like the
demands placed on short-term memory). So strings that are grammatically
ambiguous in principle (as almost all are, in fact, in English at least)
are only very rarely parsed in practice as globally ambiguous.
The essence of this system, then, in comparison to possible ways that
computers might do it, is that human parsing is done incrementally left
to right with minimal lookahead and minimal backtracking with all local
ambiguities (as well as word-identification) resolved 'on the spot' on
the basis of all possible evidence available, both grammatical and
pragmatic.
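To make that concrete, here is a toy sketch (Python; every category is
invented, and it has nothing to do with Lojban or with any real model of
human parsing) of the "commit on the spot, backtrack only when forced"
regime, run on the textbook garden-path sentence:

    # Toy sketch only. Each word gets an analysis as soon as it arrives,
    # preferring the simpler structure; the parser backtracks only when a
    # later word rules the earlier commitment out (the garden-path case).

    def greedy_parse(words):
        analysis = {}
        main_verb = None
        for w in words:
            if w in ("raced", "fell"):            # finite-verb candidates
                if main_verb is None:
                    analysis[w] = "main verb"     # simpler reading: commit now
                    main_verb = w
                else:
                    # A second finite verb arrives, so the early commitment
                    # was wrong: we are forced to backtrack and reanalyse.
                    print(f"garden path: reanalysing {main_verb!r}")
                    analysis[main_verb] = "reduced relative"
                    analysis[w] = "main verb"
                    main_verb = w
            else:
                analysis[w] = "other"
        return analysis

    print(greedy_parse("the horse raced past the barn fell".split()))
    # 'raced' ends up as a reduced relative ("the horse that was raced ...")

The only point is the order of events: one pass, immediate commitment,
reanalysis only under duress.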
> My guess is, that if the language actually made significant use of having
> 5 tokens of lookahead, that speaking it and understanding it would be
> beyond many humans. Supposedly humans have a short-term memory of 7,
> give or take 2, items. LALR(5) would require that humans remember the last
> 5 words said, in addition to the current one. Sort of...
>
> Memory and language processing are likely very different from the way a
> parser works. But I'd imagine that if people had to hold 6 words or so in
> memory, just to be able to identify a word as a determiner, or a group of
> words as a noun phrase, they'd have real problems.
For sure.
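For what it's worth, here is a rough sketch of what "k tokens of lookahead"
means mechanically: before deciding what to do with the current word, the
parser may peek at the next k. There is no grammar in it at all; the example
string and the window size are just for illustration. Whether holding such a
window corresponds to any real load on the hearer's short-term memory is
exactly the open question.

    # Pair each word with its k-token lookahead window. The point is only
    # the size of the window a hearer would have to hold in mind.
    from collections import deque

    def with_lookahead(tokens, k):
        """Yield each token together with the next k tokens."""
        buf = deque(tokens)
        while buf:
            current = buf.popleft()
            yield current, list(buf)[:k]

    for tok, window in with_lookahead("mi viska le nanmu poi melbi".split(), k=5):
        # Under LALR(5), the decision about `tok` may depend on all of `window`.
        print(f"{tok:8} lookahead = {window}")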
> If you were merely upping the amount of lookahead so that you could leave
> out terminators in a lot more cases, then it would probably make things
> easier to remember. (Humans can stick missing terminators back in rather
> handily, in many cases.)
By "stick missing terminators back in" I presume you mean "backtrack so as
to attach a phrase to a node higher than the one it was attached to before",
or "postpone resolving an attachment ambiguity until more incoming words
have been processed and there is sufficient information to resolve the
ambiguity"? My recollection is that the norm is the postponement strategy,
with the backtracking a last resort emergency strategy for moments of
crisis.
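To make the contrast concrete, here is a toy sketch of the postponement
strategy applied to an elided terminator: hold the description open and
silently put "ku" back in only when a word arrives that cannot belong inside
it. The continuation test below is a made-up stand-in, not real Lojban
grammar; a genuine backtrack would be needed only if even that postponed
decision later turned out wrong.

    # Invented "fits inside le ... ku" set, purely for the demo.
    CAN_CONTINUE = {"nanmu", "melbi", "cukta"}

    def reinsert_ku(tokens):
        out, open_description = [], False
        for tok in tokens:
            if open_description and tok not in CAN_CONTINUE:
                out.append("ku")              # postponed decision, settled now
                open_description = False
            out.append(tok)
            if tok == "le":                   # "le" opens a description
                open_description = True
        if open_description:
            out.append("ku")                  # close anything still open at the end
        return out

    print(reinsert_ku("mi viska le nanmu .i do klama".split()))
    # -> ['mi', 'viska', 'le', 'nanmu', 'ku', '.i', 'do', 'klama']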
> (The last 3 paragraphs were pulled almost entirely out of my ass. Should
> they happen to be correct, then I'll be impressed. See, however, the note
> about asking a linguist.)
My remarks too are rather exculate, linguist's though the ass be.
--And.