On Tue, Jan 20, 2015 at 2:59 PM, Jorge Llambías <jjllambias@gmail.com> wrote:

> Would it be fair to say that what an actual grammar should do is, given some input of sound or written characters, tell us how to:
>
> (1) convert the input into a string of phonemes
> (2) convert the string of phonemes into a string of words
> (3) determine a tree structure for the string of words
> (4) determine which nodes of the tree are terms, which nodes are predicates, which terms are co-referring, and which terms are arguments of which predicates

Rather:

(1') convert the input into a string [or perhaps tree] of phonemes
(2') convert the string [or perhaps tree] of phonemes into a string [or perhaps (prosodic) tree] of phonological words
(3') map the tree of phonological words to a structure of syntactic 'words'/'nodes', which structure will specify which nodes of the tree are terms, which nodes are predicates, which terms are co-referring, and which terms are arguments of which predicates
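To make the shape of the revised pipeline concrete, here is a minimal sketch of (1')-(3') as typed stages. Every name, type, and stub body below is an illustrative assumption of mine, not an existing API or an actual Lojban analysis:

```python
# Hypothetical sketch of pipeline (1')-(3'). The stage bodies are
# placeholder stubs; only the shapes of the inputs/outputs matter here.
from dataclasses import dataclass, field

@dataclass
class PhonWord:
    """A phonological word: a sequence of phonemes."""
    phonemes: list

@dataclass
class Node:
    """A syntactic node: a predicate or a term, with its arguments."""
    label: str
    kind: str                     # "predicate" or "term" (assumed labels)
    args: list = field(default_factory=list)
    coref: "Node | None" = None   # link for co-referring terms

def to_phonemes(sound: str) -> list:
    """(1') input -> string of phonemes (stub: one phoneme per letter)."""
    return [c for c in sound if c.isalpha()]

def to_phon_words(phonemes: list) -> list:
    """(2') phonemes -> phonological words (stub: fixed-size chunks)."""
    return [PhonWord(phonemes[i:i + 5]) for i in range(0, len(phonemes), 5)]

def to_syntax(words: list) -> Node:
    """(3') phonological words -> syntactic structure that marks which
    nodes are predicates, which are terms, and which terms are whose
    arguments (stub: first word is the predicate, the rest its terms)."""
    terms = [Node("".join(w.phonemes), "term") for w in words[1:]]
    return Node("".join(words[0].phonemes), "predicate", args=terms)
```

The point of the sketch is only that (3') produces one structure carrying both the tree and the predicate/term/argument information, rather than leaving a bare parse tree for a later step (4) to interpret.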
> If that's more or less on track, then we can say that the YACC/EBNF formal grammars do (3). The PEG grammar does (2) and (3). Martin's tersmu is trying to do (4). I would agree that the way our formal grammars do (3) is probably not much like the way our brains do (3), but I'm not sure I see what alternative we have.

Right. So I think (3) is not a valid step.
(3') should be doable, partly from tersmu and partly by using some natural-language formalism to analyse the syntax (e.g. at a minimum, make all phrases headed and forbid unary branching; binary branching would be a bonus if it could be managed).
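The headedness and branching constraints can be made precise with a representation that enforces them by construction. This is a minimal sketch under assumed types of my own, not tersmu's representation, and the example bracketing is chosen purely for illustration:

```python
# Sketch of the constraints suggested for (3'): every phrase has a
# designated head, and every internal node has exactly two daughters,
# so unary branching is impossible by construction.
from dataclasses import dataclass
from typing import Union

@dataclass
class Word:
    form: str

@dataclass
class Phrase:
    head: Union[Word, "Phrase"]        # every phrase is headed...
    complement: Union[Word, "Phrase"]  # ...and strictly binary

Tree = Union[Word, Phrase]

def size(t: Tree) -> int:
    """Count the words in a tree (just to show the types are usable)."""
    if isinstance(t, Word):
        return 1
    return size(t.head) + size(t.complement)

# "mi klama le zarci" as one possible headed, strictly binary tree
# (the bracketing here is an arbitrary illustration, not a claim
# about the right Lojban analysis):
tree = Phrase(Phrase(Word("klama"), Word("mi")),
              Phrase(Word("le"), Word("zarci")))
```

Because `Phrase` has exactly two fields, a one-daughter or three-daughter node simply cannot be expressed, which is the sense in which the formalism forbids unary branching and gets binary branching for free.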