Date: Fri, 05 Nov 2010 19:21:01 -0400
From: www-data
To: bpfk@lojban.org
Reply-To: bpfk-list@googlegroups.com
Subject: [bpfk] dag-cll git updates for Fri Nov 5 19:21:01 EDT 2010

commit 3d25682335e0ef1c4821abd0a535088bc9b1b3b5
Author: Robin Lee Powell
Date:   Fri Nov 5 16:17:08 2010 -0700

    Conformance with the printed book. Had to add the entire YACC
    cross-ref back. That really shouldn't be manual, but that's for
    another day.

diff --git a/21/1/index.html b/21/1/index.html
index 64d5c08..dfa88c4 100644
--- a/21/1/index.html
+++ b/21/1/index.html
@@ -19,62 +19,55 @@
EBNF Grammar of Lojban

Chapter 21
Formal Grammars

+

The following two listings constitute the formal grammar of Lojban. The first version is written in the YACC language, which is used to describe parsers, and has been used to create a parser for Lojban texts. This parser is available from the Logical Language Group. The second listing is in Extended Backus-Naur Form (EBNF) and represents the same grammar in a more human-readable form. (In case of discrepancies, the YACC version is official.) There is a cross-reference listing for each format that shows, for each selma'o and rule, which rules refer to it.

1. YACC Grammar of Lojban

-
/* LOJBAN MACHINE GRAMMAR, 3RD BASELINE AS OF 10 JANUARY 1997
-WHICH IS ORIGINAL BASELINE 20 JULY 1990 INCORPORATING JC'S TECH FIXES 1-28
-THIS DRAFT ALSO INCORPORATES CHANGE PROPOSALS 1-47 DATED 29 DECEMBER 1996
-
-THIS DOCUMENT IS EXPLICITLY DEDICATED TO THE PUBLIC DOMAIN
-BY ITS AUTHOR, THE LOGICAL LANGUAGE GROUP INC.
-CONTACT THAT ORGANIZATION AT 2904 BEAU LANE, FAIRFAX VA 22031 USA
-U.S. PHONE: 703-385-0273
-INTL PHONE: +1 703 385-0273
+
/* Lojban Machine Grammar, Final Baseline. The Lojban Machine Grammar document is explicitly dedicated to the public domain by its author, The Logical Language Group, Inc.


grammar.300 */

/* The Lojban machine parsing algorithm is a multi-step process. The YACC machine grammar presented here is an amalgam of those steps, concatenated so as to allow YACC to verify the syntactic ambiguity of the grammar. YACC is used to generate a parser for a portion of the grammar, which is LALR1 (the type of grammar that YACC is designed to identify and process successfully), but most of the rest of the grammar must be parsed using some language-coded processing.

Step 1 - Lexing

-

From phonemes, stress, and pause, it is possible to resolve Lojban unambiguously into a stream of words. Any machine processing of speech will have to have some way to deal with 'non-Lojban' failures of fluent speech, of course. The resolved words can be expressed as a text file, using Lojban's phonetic spelling rules.

-

The following steps, assume that there is the possibility of non-Lojban text within the Lojban text (delimited appropriately). Such non-Lojban text may not be reducible from speech phonetically. However, step 2 allows the filtering of a phonetically transcribed text stream, to recognize such portions of non-Lojban text where properly delimited, without interference with the parsing algorithm.

+

From phonemes, stress, and pause, it is possible to resolve Lojban unambiguously into a stream of words. Any machine processing of speech will have to have some way to deal with 'non-Lojban' failures of fluent speech, of course. The resolved words can be expressed as a text file using Lojban's phonetic spelling rules.

+

The following steps assume that there is the possibility of non-Lojban text within the Lojban text (delimited appropriately). Such non-Lojban text may not be reducible from speech phonetically. However, step 2 allows the filtering of a phonetically transcribed text stream, to recognize such portions of non-Lojban text where properly delimited, without interference with the parsing algorithm.

Step 2 - Filtering

From start to end, performing the following filtering and lexing tasks using the given order of precedence in case of conflict:

-

a. If the Lojban word "zoi" (selma'o ZOI) is identified, take the following Lojban word (which should be end delimited with a pause for separation from the following non-Lojban text) as an opening delimiter. Treat all text following that delimiter, until that delimiter recurs *after a pause*, as grammatically a single token (labelled 'anything_699' in this grammar). There is no need for processing within this text except as necessary to find the closing delimiter.

+

a. If the Lojban word "zoi" (selma'o ZOI) is identified, take the following Lojban word (which should be end delimited with a pause for separation from the following non-Lojban text) as an opening delimiter. Treat all text following that delimiter, until that delimiter recurs after a pause, as grammatically a single token (labelled 'anything_699' in this grammar). There is no need for processing within this text except as necessary to find the closing delimiter.

b. If the Lojban word "zo" (selma'o ZO) is identified, treat the following Lojban word as a token labelled 'any_word_698', instead of lexing it by its normal grammatical function.

c. If the Lojban word "lo'u" (selma'o LOhU) is identified, search for the closing delimiter "le'u" (selma'o LEhU), ignoring any such closing delimiters absorbed by the previous two steps. The text between the delimiters should be treated as the single token 'any_words_697'.

d. Categorize all remaining words into their Lojban selma'o category, including the various delimiters mentioned in the previous steps. In all steps after step 2, only the selma'o token type is significant for each word.

e. If the word "si" (selma'o SI) is identified, erase it and the previous word (or token, if the previous text has been condensed into a single token by one of the above rules).

f. If the word "sa" (selma'o SA) is identified, erase it and all preceding text as far back as necessary to make what follows attach to what precedes. (This rule is hard to formalize and may receive further definition later.)

-

g. If the word 'su' (selma'o SU) is identified, erase it and all preceding text back to and including the first preceding token word which is in one of the selma'o: NIhO, LU, TUhE, and TO. However, if speaker identification is available, a SU shall only erase to the beginning of a speaker's discourse, unless it occurs at the beginning of a speaker's discourse. (Thus, if the speaker has said something, two "su"s are required to erase the entire conversation.

+

g. If the word 'su' (selma'o SU) is identified, erase it and all preceding text back to and including the first preceding token word which is in one of the selma'o: NIhO, LU, TUhE, and TO. However, if speaker identification is available, a SU shall only erase to the beginning of a speaker's discourse, unless it occurs at the beginning of a speaker's discourse. (Thus, if the speaker has said something, two adjacent uses of "su" are required to erase the entire conversation.

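The quoting and erasure rules above lend themselves to a small token-stream filter. The sketch below is an illustration only, not the actual parser: the (word, selma'o) pair representation and the name `filter_step2` are invented for the example, the "zoi", "lo'u", and "sa" rules are omitted, and the selma'o classification is assumed to have been done beforehand.

```python
# Hedged sketch of step 2: "zo" quoting (rule b) plus "si"/"su"
# erasure (rules e and g). Tokens are hypothetical (word, selmao)
# pairs; the selma'o lookup itself is assumed, not shown.

SU_BOUNDARIES = {"NIhO", "LU", "TUhE", "TO"}  # rule g stop points

def filter_step2(tokens):
    out = []
    i = 0
    while i < len(tokens):
        word, selmao = tokens[i]
        if selmao == "ZO" and i + 1 < len(tokens):
            # Rule b: the word after "zo" becomes a single
            # any_word_698 token, regardless of its normal grammar.
            out.append((tokens[i + 1][0], "any_word_698"))
            i += 2
        elif selmao == "SI":
            # Rule e: erase "si" and the previous word or token.
            if out:
                out.pop()
            i += 1
        elif selmao == "SU":
            # Rule g: erase back to and including the nearest
            # preceding NIhO/LU/TUhE/TO token.
            while out and out[-1][1] not in SU_BOUNDARIES:
                out.pop()
            if out:
                out.pop()
            i += 1
        else:
            out.append((word, selmao))
            i += 1
    return out
```

Note that because "zo" quotes the immediately following word, a "si" right after "zo" is itself quoted rather than acting as an erasure.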
Step 3 - Termination

If the text contains a FAhO, treat that as the end-of-text and ignore everything that follows it.

Step 4 - Absorption of Grammar-Free Tokens

In a new pass, perform the following absorptions (absorption means that the token is removed from the grammar for processing in following steps, and optionally reinserted, grouped with the absorbing token after parsing is completed).

a. Token sequences of the form any - (ZEI - any) ..., where there may be any number of ZEIs, are merged into a single token of selma'o BRIVLA.

b. Absorb all selma'o BAhE tokens into the following token. If they occur at the end of text, leave them alone (they are errors).

c. Absorb all selma'o BU tokens into the previous token. Relabel the previous token as selma'o BY.

d. If selma'o NAI occurs immediately following any of tokens UI or CAI, absorb the NAI into the previous token.

e. Absorb all members of selma'o DAhO, FUhO, FUhE, UI, Y, and CAI into the previous token. All of these null grammar tokens are permitted following any word of the grammar, without interfering with that word's grammatical function, or causing any effect on the grammatical interpretation of any other token in the text. Indicators at the beginning of text are explicitly handled by the grammar.

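The absorption pass can be sketched in the same style. Again this is a hedged illustration with an invented (text, selma'o) token representation and function name: it covers only rules a (ZEI compounding), c (BU absorption), and e (trailing indicators), leaving out BAhE and NAI handling for brevity.

```python
# Hedged sketch of step 4 absorptions: ZEI merging (rule a),
# BU absorption (rule c), and null-grammar indicators (rule e).
# Tokens are hypothetical (text, selmao) pairs.

ABSORBED = {"DAhO", "FUhO", "FUhE", "UI", "Y", "CAI"}  # rule e selma'o

def absorb_step4(tokens):
    out = []
    i = 0
    while i < len(tokens):
        text, selmao = tokens[i]
        if selmao == "ZEI" and out and i + 1 < len(tokens):
            # Rule a: merge "any zei any" into one BRIVLA token;
            # a further ZEI keeps extending the same compound.
            prev_text, _ = out.pop()
            out.append((prev_text + " zei " + tokens[i + 1][0], "BRIVLA"))
            i += 2
        elif selmao == "BU" and out:
            # Rule c: absorb "bu" into the previous token,
            # relabelling the result as selma'o BY.
            prev_text, _ = out.pop()
            out.append((prev_text + " bu", "BY"))
            i += 1
        elif selmao in ABSORBED and out:
            # Rule e: indicators attach to the previous token without
            # changing its selma'o. With no previous token (start of
            # text) they are left alone for the grammar to handle.
            prev_text, prev_selmao = out.pop()
            out.append((prev_text + " " + text, prev_selmao))
            i += 1
        else:
            out.append((text, selmao))
            i += 1
    return out
```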
Step 5 - Insertion of Lexer Lexemes

Lojban is not in itself LALR1. There are words whose grammatical function is determined by following tokens. As a result, parsing of the YACC grammar must take place in two steps. In the first step, certain strings of tokens with defined grammars are identified, and either

a. are replaced by a single specified 'lexer token' for step 6, or

b. the lexer token is inserted in front of the token string to identify it uniquely.

-

The YACC grammar included herein is written to make YACC generation of a step 6 parser easy regardless of whether a. or b. is used. The strings of tokens to be labelled with lexer tokens are found in rule terminals labelled with numbers between 900 and 1099. These rules are defined with the lexer tokens inserted, with the result that it can be verified that the language is LALR1 under option b. after steps 1 through 4 have been performed. Alternatively, if option a. is to be used, these rules are commented out, and the rule terminals labelled from 800 to 900 refer to the lexer tokens *without* the strings of defining tokens. Two sets of lexer tokens are defined in the token set so as to be compatible with either option.

-

In this step, the strings must be labelled with the appropriate lexer tokens. Order of inserting lexer tokens *IS* significant, since some shorter strings that would be marked with a lexer token may be found inside longer strings. If the tokens are inserted before or in place of the shorter strings, the longer strings cannot be identified.

-

If option a. is chosen, the following order of insertion works correctly (it is not the only possible order): A, C, D, B, U, E, H, I, J, K, M ,N, G, O, V, W, F, P, R, T, S, Y, L, Q. This ensures that the longest rules will be processed first; a PA+MAI will not be seen as a PA with a dangling MAI at the end, for example.

+

The YACC grammar included herein is written to make YACC generation of a step 6 parser easy regardless of whether a. or b. is used. The strings of tokens to be labelled with lexer tokens are found in rule terminals labelled with numbers between 900 and 1099. These rules are defined with the lexer tokens inserted, with the result that it can be verified that the language is LALR1 under option b. after steps 1 through 4 have been performed. Alternatively, if option a. is to be used, these rules are commented out, and the rule terminals labelled from 800 to 900 refer to the lexer tokens without the strings of defining tokens. Two sets of lexer tokens are defined in the token set so as to be compatible with either option.

+

In this step, the strings must be labelled with the appropriate lexer tokens. Order of inserting lexer tokens IS significant, since some shorter strings that would be marked with a lexer token may be found inside longer strings. If the tokens are inserted before or in place of the shorter strings, the longer strings cannot be identified.

+

If option a. is chosen, the following order of insertion works correctly (it is not the only possible order): A, C, D, B, U, E, H, I, J, K, M, N, G, O, V, W, F, P, R, T, S, Y, L, Q. This ensures that the longest rules will be processed first; a PA+MAI will not be seen as a PA with a dangling MAI at the end, for example.

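Why the insertion order matters can be shown with a tiny ordered rule table. This is a hedged fragment, not the real 900-series rule set: the two rules, the predicate representation, and the function name `insert_lexer_tokens` are invented for illustration; only the relative ordering (the PA+MOI rule before the bare-PA rule) reflects the principle described above.

```python
# Hedged sketch of step 5, option b: lexer tokens are inserted in
# front of matching token strings, trying rules in a fixed order so
# that longer strings are claimed before their prefixes.

# Each entry: (lexer token, predicate over the upcoming selma'o
# sequence). lexer_Y (PA+MOI) must come before lexer_L (bare PA),
# or "PA MOI" would be mislabelled as a plain number.
RULES = [
    ("lexer_Y_725", lambda seq: seq[:2] == ["PA", "MOI"]),
    ("lexer_L_712", lambda seq: seq[:1] == ["PA"]),
]

def insert_lexer_tokens(selmao_seq):
    out = []
    i = 0
    while i < len(selmao_seq):
        for token, matches in RULES:
            if matches(selmao_seq[i:]):
                # Option b: insert the lexer token in front of the
                # string; the original tokens stay in place.
                out.append(token)
                break
        out.append(selmao_seq[i])
        i += 1
    return out
```

Swapping the two rules would tag the PA of "PA MOI" with lexer_L_712 first, hiding the longer PA_MOI string, which is exactly the failure mode the text warns about.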
Step 6 - YACC Parsing

YACC should now be able to parse the Lojban text in accordance with the rule terminals labelled from 1 to 899 under option 5a, or 1 to 1099 under option 5b. Comment out the rules beyond 900 if option 5a is used, and comment out the 700-series of lexer-tokens, while restoring the series of lexer tokens numbered from 900 up.

*/

 %token A_501            /* eks; basic afterthought logical connectives */
 %token BAI_502          /* modal operators */
 %token BAhE_503         /* next word intensifier */
 %token BE_504           /* sumti link to attach sumti to a selbri */
 %token BEI_505          /* multiple sumti separator between BE, BEI */
 %token BEhO_506         /* terminates BE/BEI specified descriptors */
@@ -234,45 +227,45 @@ the 900 series rules are found in the lexer.  */
 %token lexer_R_718      /* flags a GIhEK, not BO or KE */
 %token lexer_S_719      /* flags simple I */
 %token lexer_T_720      /* flags I_JEK */
 %token lexer_U_721      /* flags a JEK_BO */
 %token lexer_V_722      /* flags a JOIK_BO */
 %token lexer_W_723      /* flags a JOIK_KE */
 /* %token lexer_X_724   /* null */
 %token lexer_Y_725      /* flags a PA_MOI */


-/*%token lexer_A_905    /* :  lexer_A_701  utt_ordinal_root_906 */
-/*%token lexer_B_910    /* :  lexer_B_702  EK_root_911 */
-/*%token lexer_C_915    /* :  lexer_C_703  EK_root_911  BO_508 */
-/*%token lexer_D_916    /* :  lexer_D_704  EK_root_911  KE_551 */
-/*%token lexer_E_925    /* :  lexer_E_705  JEK_root_926 */
-/*%token lexer_F_930    /* :  lexer_F_706  JOIK_root_931 */
-/*%token lexer_G_935    /* :  lexer_G_707  GA_537 */
-/*%token lexer_H_940    /* :  lexer_H_708  GUhA_544 */
-/*%token lexer_I_945    /* :  lexer_I_709  NAhE_583  BO_508 */
-/*%token lexer_J_950    /* :  lexer_J_710  NA_578  KU_556 */
-/*%token lexer_K_955    /* :  lexer_K_711  I_432  BO_508 */
-/*%token lexer_L_960    /* :  lexer_L_712  number_root_961 */
-/*%token lexer_M_965    /* :  lexer_M_713  GIhEK_root_991  BO_508 */
-/*%token lexer_N_966    /* :  lexer_N_714  GIhEK_root_991  KE_551 */
-/*%token lexer_O_970    /* :  lexer_O_715  simple_tense_modal_972 */
-/*%token lexer_P_980    /* :  lexer_P_716  GIK_root_981 */
-/*%token lexer_Q_985    /* :  lexer_Q_717  lerfu_string_root_986 */
-/*%token lexer_R_990    /* :  lexer_R_718  GIhEK_root_991 */
-/*%token lexer_S_995    /* :  lexer_S_719  I_545 */
-/*%token  lexer_T_1000  /* :  lexer_T_720  I_545  simple_JOIK_JEK_957 */
-/*%token lexer_U_1005   /* :  lexer_U_721  JEK_root_926  BO_508 */
-/*%token lexer_V_1010   /* :  lexer_V_722  JOIK_root_931  BO_508 */
-/*%token lexer_W_1015   /* :  lexer_W_723  JOIK_root_931  KE_551 */
-/*%token lexer_X_1020   /* null */
-/*%token lexer_Y_1025   /* :  lexer_Y_725  number_root_961  MOI_663 */
+/* %token lexer_A_905    /* :  lexer_A_701  utt_ordinal_root_906 */
+/* %token lexer_B_910    /* :  lexer_B_702  EK_root_911 */
+/* %token lexer_C_915    /* :  lexer_C_703  EK_root_911  BO_508 */
+/* %token lexer_D_916    /* :  lexer_D_704  EK_root_911  KE_551 */
+/* %token lexer_E_925    /* :  lexer_E_705  JEK_root_926 */
+/* %token lexer_F_930    /* :  lexer_F_706  JOIK_root_931 */
+/* %token lexer_G_935    /* :  lexer_G_707  GA_537 */
+/* %token lexer_H_940    /* :  lexer_H_708  GUhA_544 */
+/* %token lexer_I_945    /* :  lexer_I_709  NAhE_583  BO_508 */
+/* %token lexer_J_950    /* :  lexer_J_710  NA_578  KU_556 */
+/* %token lexer_K_955    /* :  lexer_K_711  I_432  BO_508 */
+/* %token lexer_L_960    /* :  lexer_L_712  number_root_961 */
+/* %token lexer_M_965    /* :  lexer_M_713  GIhEK_root_991  BO_508 */
+/* %token lexer_N_966    /* :  lexer_N_714  GIhEK_root_991  KE_551 */
+/* %token lexer_O_970    /* :  lexer_O_715  simple_tense_modal_972 */
+/* %token lexer_P_980    /* :  lexer_P_716  GIK_root_981 */
+/* %token lexer_Q_985    /* :  lexer_Q_717  lerfu_string_root_986 */
+/* %token lexer_R_990    /* :  lexer_R_718  GIhEK_root_991 */
+/* %token lexer_S_995    /* :  lexer_S_719  I_545 */
+/* %token lexer_T_1000   /* :  lexer_T_720  I_545  simple_JOIK_JEK_957 */
+/* %token lexer_U_1005   /* :  lexer_U_721  JEK_root_926  BO_508 */
+/* %token lexer_V_1010   /* :  lexer_V_722  JOIK_root_931  BO_508 */
+/* %token lexer_W_1015   /* :  lexer_W_723  JOIK_root_931  KE_551 */
+/* %token lexer_X_1020   /* null */
+/* %token lexer_Y_1025   /* :  lexer_Y_725  number_root_961  MOI_663 */


 %start text_0

 %%

 text_0                  :  text_A_1
                         |  indicators_411  text_A_1
                         |  free_modifier_32  text_A_1
                         |  cmene_404  text_A_1
@@ -498,22 +491,21 @@ the 900 series rules are found in the lexer.  */
                         ;
=20
 sumti_F_96              :  sumti_G_97
                            /* outer-quantified sumti */
                         |  quantifier_300  sumti_G_97
                         ;

 sumti_G_97              :  qualifier_483  sumti_90  LUhU_gap_463
                         |  qualifier_483  relative_clauses_121
                                 sumti_90  LUhU_gap_463
-                           /*sumti grouping, set/mass/individual conversion */
-                           /*also sumti scalar negation */
+                           /*sumti grouping, set/mass/individual conversion; also sumti scalar negation */
                         |  anaphora_400
                         |  LA_499  cmene_404
                         |  LA_499  relative_clauses_121  cmene_404
                         |  LI_489  MEX_310  LOhO_gap_472
                         |  description_110
                         |  quote_arg_432
                         ;
=20
=20
=20
@@ -622,21 +614,21 @@ the 900 series rules are found in the lexer.  */
 /*  Main entry point for MEX; everything but a number must be in parens.  */
=20
 quantifier_300          :  number_812  BOI_gap_461
                         |  left_bracket_470  MEX_310  right_bracket_gap_471
                         ;
=20
=20
=20
 /*  Entry point for MEX used after LI; no parens needed, but LI now has an
     elidable terminator. (This allows us to express the difference between
-    "the expression a + b" and "the expression (a + b)"_)   */
+    "the expression a + b" and "the expression (a + b)" )   */
=20
 /*  This rule supports left-grouping infix expressions and reverse Polish
     expressions. To handle infix monadic, use a null operand; to handle
     infix with more than two operands (whatever that means) use an extra
     operator or an array operand.   */
=20
 MEX_310                 :  MEX_A_311
                         |  MEX_310  operator_370  MEX_A_311
                         |  FUhA_441  rp_expression_330
                         ;
@@ -1417,21 +1409,21 @@ the 900 series rules are found in the lexer.  */
                         ;
=20
=20
 lexer_N_966             :  lexer_N_714  GIhEK_root_991  KE_551
                         |  lexer_N_714  GIhEK_root_991  simple_tag_971  KE_551
                         ;
=20
=20
 lexer_O_970             :  lexer_O_715  simple_tense_modal_972
                         ;
-/* the following rule is a lexer version of non-terminal _815 for compounding
+/* the following rule is a lexer version of non-terminal_815 for compounding
    PU/modals; it disallows the lexer picking out FIhO clauses, which would
    require it to have knowledge of the main parser grammar */
=20
 simple_tag_971          :  simple_tense_modal_972
                         |  simple_tag_971  simple_JOIK_JEK_957
                                 simple_tense_modal_972
                         ;
=20
=20
 simple_tense_modal_972  :  simple_tense_modal_A_973
@@ -1467,22 +1459,22 @@ the 900 series rules are found in the lexer.  */
 /* baca'a = actually will be */
 /* bapu'i = can and will have */
 /* banu'o = can, but won't have yet */
 /* canu'ojebapu'i = can, hasn't yet, but will */
=20
 tense_C_979             :  time_1030
    /* time-only */
    /* space defaults to time-space reference space */
=20
                         |  space_1040
-   /* can include time if specified with VIhA */
-   /* otherwise time defaults to the time-space reference time */
+   /* can include time if specified with VIhA; otherwise time defaults to the
+      time-space reference time */
=20
                         |  time_1030  space_1040
    /* time and space - If space_1040 is marked with
    VIhA for space-time the tense may be self-contradictory */
    /* interval prop before space_time is for time distribution */
                         |  space_1040  time_1030
                         ;
=20
 lexer_P_980             :  lexer_P_716  GIK_root_981
                         ;
@@ -1710,20 +1702,821 @@ the 900 series rules are found in the lexer.  */
                            erases back to start of text which is the
                           beginning of a speaker's statement,
                            a parenthesis (TO/TOI), a LU/LIhU quote,
                            or a TUhE/TUhU utterance string.
                         ;
=20
=20
 */
 %%
 
+

2. YACC Grammar Cross-Reference

+
+
A_501
+
EK_root_911
+
anaphora_400
+
sumti_G_97
+
anything_699
+
token_1100, ZOI_quote_434
+
any_word_698
+
null_1101, token_1100, ZOI_quote_434, ZO_quote_435
+
any_words_697
+
LOhU_quote_436
+
BAhE_503
+
token_1100
+
BAI_502
+
modal_A_975
+
BE_446
+
linkargs_160
+
BE_504
+
BE_446
+
BEhO_506
+
BEhO_gap_467
+
BEhO_gap_467
+
linkargs_160
+
BEI_442
+
links_161
+
BEI_505
+
BEI_442
+
BIhE_439
+
MEX_A_311
+
BIhE_650
+
BIhE_439
+
BIhI_507
+
interval_932
+
BO_479
+
selbri_F_136
+
BO_508
+
BO_479, lexer_C_915, lexer_I_945, lexer_K_955, lexer_M_965, lexer_U_1005, lexer_V_1010
+
BOI_651
+
BOI_gap_461, sub_gap_462
+
BOI_gap_461
+
anaphora_400, operand_C_385, quantifier_300
+
bridi_tail_50
+
bridi_tail_50, sentence_40
+
bridi_tail_A_51
+
bridi_tail_50, bridi_tail_A_51
+
bridi_tail_B_52
+
bridi_tail_A_51, bridi_tail_B_52
+
bridi_tail_C_53
+
bridi_tail_B_52
+
bridi_valsi_407
+
tanru_unit_B_152
+
bridi_valsi_A_408
+
bridi_valsi_407
+
BRIVLA_509
+
bridi_valsi_A_408
+
BU_511
+
token_1100
+
BY_513
+
lerfu_word_987
+
CAhA_514
+
tense_B_978
+
CAI_515
+
indicator_413, token_1100
+
CEhE_495
+
terms_B_82
+
CEhE_517
+
CEhE_495
+
CEI_444
+
tanru_unit_150
+
CEI_516
+
CEI_444
+
cmene_404
+
sumti_G_97, text_0, vocative_35
+
CMENE_518
+
cmene_A_405
+
cmene_A_405
+
cmene_404, cmene_A_405
+
CO_443
+
selbri_B_132
+
CO_519
+
CO_443
+
COI_416
+
COI_416, DOI_415
+
COI_520
+
COI_A_417
+
COI_A_417
+
COI_416
+
CU_521
+
front_gap_451
+
CUhE_522
+
simple_tense_modal_972
+
DAhO_524
+
indicator_413, token_1100
+
description_110
+
sumti_G_97
+
discursive_bridi_34
+
free_modifier_A_33
+
DOhU_526
+
DOhU_gap_457
+
DOhU_gap_457
+
vocative_35
+
DOI_415
+
vocative_35
+
DOI_525
+
DOI_415
+
EK_802
+
fragment_20, JOIK_EK_421
+
EK_BO_803
+
operand_B_383, sumti_C_93
+
EK_KE_804
+
operand_381, sumti_A_91
+
EK_root_911
+
lexer_B_910, lexer_C_915, lexer_D_916
+
error
+
BEhO_gap_467, BOI_gap_461, DOhU_gap_457, FEhU_gap_458, gap_450, GEhU_gap_464, KEhE_gap_466, KEI_gap_453, KUhO_gap_469, LIhU_gap_448, LOhO_gap_472, LUhU_gap_463, MEhU_gap_465, MEX_gap_452, NUhU_gap_460, right_bracket_gap_471, right_br_no_free_474, SEhU_gap_459, sub_gap_462, TEhU_gap_473, TOI_gap_468, TUhU_gap_454, VAU_gap_456
+
FA_481
+
mod_head_490
+
FA_527
+
FA_481
+
FAhA_528
+
space_direction_1048
+
FEhE_530
+
space_int_props_A_1050
+
FEhU_531
+
FEhU_gap_458
+
FEhU_gap_458
+
tense_modal_815
+
FIhO_437
+
tense_modal_815
+
FIhO_532
+
FIhO_437
+
FOI_533
+
lerfu_word_987
+
fragment_20
+
paragraph_10
+
free_modifier_32
+
anaphora_400, BE_446, BEhO_gap_467, BEI_442, BIhE_439, BO_479, BOI_gap_461, bridi_valsi_407, CEhE_495, CEI_444, cmene_404, CO_443, EK_802, EK_BO_803, EK_KE_804, FA_481, FEhU_gap_458, FIhO_437, free_modifier_32, front_gap_451, FUhA_441, gap_450, GEhU_gap_464, GEK_807, GIhEK_818, GIhEK_BO_813, GIhEK_KE_814, GIK_816, GOI_485, GUhEK_808, I_819, I_BO_811, I_JEK_820, JAI_478, JEK_BO_821, JOhI_431, JOIK_BO_822, JOIK_EK_421, JOIK_JEK_422, JOIK_KE_823, KE_493, KEhE_gap_466, KEI_gap_453, KUhO_gap_469, LA_499, LE_488, left_bracket_470, LI_489, LOhO_gap_472, LUhU_gap_463, MAhO_430, ME_477, MEhU_gap_465, MEX_gap_452, MEX_operator_374, MOhE_427, MOI_476, NA_445, NAhE_482, NAhE_BO_809, NAhU_429, NA_KU_810, NIhE_428, NOI_484, NU_A_426, NUhA_475, NUhI_496, NUhU_gap_460, para_mark_410, PEhE_494, PEhO_438, qualifier_483, quote_arg_432, right_bracket_gap_471, SE_480, SEI_440, SOI_498, TEhU_gap_473, tense_modal_815, text_0, TUhE_447, TUhU_gap_454, VAU_gap_456, VUhO_497, XI_424, ZIhE_487, ZOhU_492
+
free_modifier_A_33
+
free_modifier_32
+
front_gap_451
+
discursive_bridi_34, sentence_40
+
FUhA_441
+
MEX_310
+
FUhA_655
+
FUhA_441
+
FUhE_535
+
indicators_411, token_1100
+
FUhO_536
+
indicator_413, token_1100
+
GA_537
+
lexer_G_935
+
GAhO_656
+
JOIK_root_931
+
gap_450
+
description_110, modifier_84, sumti_E_95
+
GEhU_538
+
GEhU_gap_464
+
GEhU_gap_464
+
relative_clause_122
+
GEK_807
+
gek_sentence_54, operand_C_385, sumti_D_94, term_set_85
+
gek_sentence_54
+
bridi_tail_C_53, gek_sentence_54
+
GI_539
+
GIK_root_981, lexer_G_935
+
GIhA_541
+
GIhEK_root_991
+
GIhEK_818
+
bridi_tail_A_51, fragment_20
+
GIhEK_BO_813
+
bridi_tail_B_52
+
GIhEK_KE_814
+
bridi_tail_50
+
GIhEK_root_991
+
lexer_M_965, lexer_N_966, lexer_R_990
+
GIK_816
+
gek_sentence_54, GUhEK_selbri_137, operand_C_385, operator_A_371, sumti_D_94, term_set_85
+
GIK_root_981
+
lexer_G_935, lexer_P_980
+
GOhA_543
+
bridi_valsi_A_408
+
GOI_485
+
relative_clause_122
+
GOI_542
+
GOI_485
+
GUhA_544
+
lexer_H_940
+
GUhEK_808
+
GUhEK_selbri_137, operator_A_371
+
GUhEK_selbri_137
+
selbri_F_136
+
I_545
+
I_root_956, lexer_S_995, lexer_T_1000
+
I_819
+
paragraph_10, text_B_2
+
I_BO_811
+
statement_B_13, text_B_2
+
I_JEK_820
+
statement_A_12, text_B_2
+
indicator_413
+
indicators_A_412
+
indicators_411
+
text_0
+
indicators_A_412
+
indicators_411, indicators_A_412
+
interval_932
+
JOIK_root_931
+
interval_property_1051
+
space_int_props_A_1050, time_int_props_1036
+
I_root_956
+
lexer_K_955
+
JA_546
+
JEK_root_926
+
JAI_478
+
tanru_unit_B_152
+
JAI_547
+
JAI_478
+
JEK_805
+
JOIK_JEK_422, simple_JOIK_JEK_957
+
JEK_BO_821
+
operator_A_371, selbri_E_135
+
JEK_root_926
+
lexer_E_925, lexer_U_1005
+
JOhI_431
+
operand_C_385
+
JOhI_657
+
JOhI_431
+
JOI_548
+
JOIK_root_931
+
JOIK_806
+
JOIK_EK_421, JOIK_JEK_422, simple_JOIK_JEK_957
+
JOIK_BO_822
+
operand_B_383, operator_A_371, selbri_E_135, sumti_C_93
+
JOIK_EK_421
+
operand_A_382, sumti_B_92
+
JOIK_JEK_422
+
NU_425, operator_370, selbri_D_134, tag_491, terms_A_81, text_A_1
+
JOIK_KE_823
+
operand_381, operator_370, selbri_D_134, sumti_A_91
+
JOIK_root_931
+
lexer_F_930, lexer_G_935, lexer_V_1010, lexer_W_1015
+
KE_493
+
gek_sentence_54, operator_B_372, tanru_unit_B_152
+
KE_551
+
KE_493, lexer_D_916, lexer_N_966, lexer_W_1015
+
KEhE_550
+
KEhE_gap_466
+
KEhE_gap_466
+
bridi_tail_50, gek_sentence_54, operand_381, operator_370, operator_B_372, selbri_D_134, sumti_A_91, tanru_unit_B_152
+
KEI_552
+
KEI_gap_453
+
KEI_gap_453
+
tanru_unit_B_152
+
KI_554
+
simple_tense_modal_972, simple_tense_modal_A_973, tense_A_977
+
KOhA_555
+
anaphora_400
+
KU_556
+
gap_450, lexer_J_950
+
KUhE_658
+
MEX_gap_452
+
KUhO_557
+
KUhO_gap_469
+
KUhO_gap_469
+
relative_clause_122
+
LA_499
+
description_110, sumti_G_97
+
LA_558
+
LA_499
+
LAhE_561
+
qualifier_483
+
LAU_559
+
lerfu_word_987
+
LE_488
+
description_110
+
LE_562
+
LE_488
+
left_bracket_470
+
quantifier_300, subscript_486
+
LEhU_565
+
LOhU_quote_436
+
lerfu_string_817
+
anaphora_400, operand_C_385, subscript_486
+
lerfu_string_root_986
+
lerfu_string_root_986, lerfu_word_987, lexer_Q_985, lexer_Y_1025, utt_ordinal_root_906
+
lerfu_word_987
+
lerfu_string_root_986, lerfu_word_987, number_root_961
+
lexer_A_701
+
lexer_A_905
+
lexer_A_905
+
utterance_ordinal_801
+
lexer_B_702
+
lexer_B_910
+
lexer_B_910
+
EK_802
+
lexer_C_703
+
lexer_C_915
+
lexer_C_915
+
EK_BO_803
+
lexer_D_704
+
lexer_D_916
+
lexer_D_916
+
EK_KE_804
+
lexer_E_705
+
lexer_E_925
+
lexer_E_925
+
JEK_805
+
lexer_F_706
+
lexer_F_930
+
lexer_F_930
+
JOIK_806
+
lexer_G_707
+
lexer_G_935
+
lexer_G_935
+
GEK_807
+
lexer_H_708
+
lexer_H_940
+
lexer_H_940
+
GUhEK_808
+
lexer_I_709
+
lexer_I_945
+
lexer_I_945
+
NAhE_BO_809
+
lexer_J_710
+
lexer_J_950
+
lexer_J_950
+
NA_KU_810
+
lexer_K_711
+
lexer_K_955
+
lexer_K_955
+
I_BO_811
+
lexer_L_712
+
lexer_L_960
+
lexer_L_960
+
number_812
+
lexer_M_713
+
lexer_M_965
+
lexer_M_965
+
GIhEK_BO_813
+
lexer_N_714
+
lexer_N_966
+
lexer_N_966
+
GIhEK_KE_814
+
lexer_O_715
+
lexer_O_970
+
lexer_O_970
+
tense_modal_815
+
lexer_P_716
+
lexer_P_980
+
lexer_P_980
+
GIK_816
+
lexer_Q_717
+
lexer_Q_985
+
lexer_Q_985
+
lerfu_string_817
+
lexer_R_718
+
lexer_R_990
+
lexer_R_990
+
GIhEK_818
+
lexer_S_719
+
lexer_S_995
+
lexer_S_995
+
I_819
+
lexer_T_1000
+
I_JEK_820
+
lexer_T_720
+
lexer_T_1000
+
lexer_U_1005
+
JEK_BO_821
+
lexer_U_721
+
lexer_U_1005
+
lexer_V_1010
+
JOIK_BO_822
+
lexer_V_722
+
lexer_V_1010
+
lexer_W_1015
+
JOIK_KE_823
+
lexer_W_723
+
lexer_W_1015
+
lexer_Y_1025
+
PA_MOI_824
+
lexer_Y_725
+
lexer_Y_1025
+
LI_489
+
sumti_G_97
+
LI_566
+
LI_489
+
LIhU_567
+
LIhU_gap_448
+
LIhU_gap_448
+
quote_arg_A_433
+
linkargs_160
+
fragment_20, tanru_unit_A_151
+
links_161
+
fragment_20, linkargs_160, links_161
+
LOhO_568
+
LOhO_gap_472
+
LOhO_gap_472
+
sumti_G_97
+
LOhU_569
+
LOhU_quote_436
+
LOhU_quote_436
+
quote_arg_A_433
+
LU_571
+
quote_arg_A_433
+
LUhU_573
+
LUhU_gap_463
+
LUhU_gap_463
+
operand_C_385, sumti_G_97
+
MAhO_430
+
MEX_operator_374
+
MAhO_662
+
MAhO_430
+
MAI_661
+
utt_ordinal_root_906
+
ME_477
+
tanru_unit_B_152
+
ME_574
+
ME_477
+
MEhU_575
+
MEhU_gap_465
+
MEhU_gap_465
+
tanru_unit_B_152
+
MEX_310
+
MEX_310, MEX_operator_374, quantifier_300, subscript_486, sumti_G_97
+
MEX_A_311
+
MEX_310, MEX_A_311
+
MEX_B_312
+
MEX_A_311, MEX_C_313
+
MEX_C_313
+
MEX_B_312, MEX_C_313, operand_C_385
+
MEX_gap_452
+
MEX_B_312
+
MEX_operator_374
+
MEX_operator_374, operator_B_372, tanru_unit_B_152
+
modal_974
+
simple_tense_modal_A_973
+
modal_A_975
+
modal_974
+
mod_head_490
+
modifier_84
+
modifier_84
+
term_83
+
MOhE_427
+
operand_C_385
+
MOhE_664
+
MOhE_427
+
MOhI_577
+
space_motion_1041
+
MOI_476
+
tanru_unit_B_152
+
MOI_663
+
lexer_Y_1025, MOI_476
+
NA_445
+
fragment_20, gek_sentence_54, selbri_A_131
+
NA_578
+
EK_root_911, GIhEK_root_991, JEK_root_926, lexer_J_950, NA_445
+
NAhE_482
+
MEX_operator_374, selbri_F_136, tanru_unit_B_152
+
NAhE_583
+
lexer_I_945, NAhE_482, simple_tense_modal_972
+
NAhE_BO_809
+
qualifier_483
+
NAhU_429
+
MEX_operator_374
+
NAhU_665
+
NAhU_429
+
NAI_581
+
COI_A_417, EK_root_911, GIhEK_root_991, GIK_root_981, indicator_413, interval_932, interval_property_1051, JEK_root_926, JOIK_root_931, lexer_G_935, lexer_H_940, modal_974, NU_A_426, space_direction_1048, text_0, time_direction_1035, token_1100
+
NA_KU_810
+
term_83
+
NIhE_428
+
operand_C_385
+
NIhE_666
+
NIhE_428
+
NIhO_584
+
para_mark_410
+
NOI_484
+
relative_clause_122
+
NOI_585
+
NOI_484
+
NU_425
+
NU_425, tanru_unit_B_152
+
NU_586
+
NU_A_426
+
NU_A_426
+
NU_425
+
NUhA_475
+
tanru_unit_B_152
+
NUhA_667
+
NUhA_475
+
NUhI_496
+
term_set_85
+
NUhI_587
+
NUhI_496
+
NUhU_588
+
NUhU_gap_460
+
NUhU_gap_460
+
term_set_85
+
number_812
+
quantifier_300, subscript_486
+
number_root_961
+
interval_property_1051, lexer_L_960, lexer_Y_1025, number_root_961, utt_ordinal_root_906
+
operand_381
+
MEX_B_312, operand_381, operand_C_385, rp_operand_332
+
operand_A_382
+
operand_381, operand_A_382
+
operand_B_383
+
operand_A_382, operand_B_383
+
operand_C_385
+
operand_B_383, operand_C_385
+
operator_370
+
MEX_310, MEX_A_311, MEX_B_312, operator_370, operator_B_372, rp_expression_330
+
operator_A_371
+
operator_370, operator_A_371
+
operator_B_372
+
operator_A_371
+
PA_672
+
lerfu_string_root_986, number_root_961
+
PA_MOI_824
+
bridi_valsi_A_408
+
paragraph_10
+
paragraph_10, paragraphs_4
+
paragraphs_4
+
paragraphs_4, text_C_3
+
para_mark_410
+
paragraphs_4, para_mark_410, text_B_2
+
parenthetical_36
+
free_modifier_A_33
+
PEhE_494
+
terms_A_81
+
PEhE_591
+
PEhE_494
+
PEhO_438
+
MEX_B_312
+
PEhO_673
+
PEhO_438
+
prenex_30
+
fragment_20, statement_11, subsentence_41
+
PU_592
+
time_direction_1035
+
qualifier_483
+
operand_C_385, sumti_G_97
+
quantifier_300
+
fragment_20, operand_C_385, sumti_E_95, sumti_F_96, sumti_tail_A_112
+
quote_arg_432
+
sumti_G_97
+
quote_arg_A_433
+
quote_arg_432
+
RAhO_593
+
bridi_valsi_A_408
+
relative_clause_122
+
relative_clauses_121
+
relative_clauses_121
+
fragment_20, relative_clauses_121, sumti_90, sumti_E_95, sumti_G_97, sumti_tail_111, sumti_tail_A_112, vocative_35
+
right_bracket_gap_471
+
quantifier_300
+
right_br_no_free_474
+
subscript_486
+
ROI_594
+
interval_property_1051
+
rp_expression_330
+
MEX_310, rp_operand_332
+
rp_operand_332
+
rp_expression_330
+
SA_595
+
null_1101
+
SE_480
+
MEX_operator_374, tanru_unit_B_152
+
SE_596
+
EK_root_911, GIhEK_root_991, interval_932, JEK_root_926, JOIK_root_931, lexer_G_935, lexer_H_940, modal_A_975, SE_480
+
SEhU_598
+
SEhU_gap_459
+
SEhU_gap_459
+
discursive_bridi_34
+
SEI_440
+
discursive_bridi_34
+
SEI_597
+
SEI_440
+
selbri_130
+
bridi_tail_C_53, discursive_bridi_34, GUhEK_selbri_137, MEX_operator_374, operand_C_385, selbri_A_131, sumti_E_95, sumti_tail_A_112, tense_modal_815, vocative_35
+
selbri_A_131
+
selbri_130
+
selbri_B_132
+
selbri_A_131, selbri_B_132
+
selbri_C_133
+
selbri_B_132, selbri_C_133, selbri_D_134, tanru_unit_B_152
+
selbri_D_134
+
selbri_C_133, selbri_D_134
+
selbri_E_135
+
selbri_D_134, selbri_E_135
+
selbri_F_136
+
GUhEK_selbri_137, selbri_E_135, selbri_F_136
+
sentence_40
+
statement_C_14, subsentence_41
+
SI_601
+
null_1101
+
simple_JOIK_JEK_957
+
I_root_956, lexer_T_1000, simple_tag_971
+
simple_tag_971
+
lexer_C_915, lexer_D_916, lexer_G_935, lexer_K_955, lexer_M_965, lexer_N_966, lexer_U_1005, lexer_V_1010, lexer_W_1015, simple_tag_971
+
simple_tense_modal_972
+
lexer_O_970, simple_tag_971
+
simple_tense_modal_A_973
+
simple_tense_modal_972
+
SOI_498
+
discursive_bridi_34
+
SOI_602
+
SOI_498
+
space_1040
+
tense_C_979
+
space_A_1042
+
space_1040
+
space_B_1043
+
space_A_1042
+
space_C_1044
+
space_B_1043, space_C_1044
+
space_direction_1048
+
space_intval_1046, space_offset_1045
+
space_int_props_1049
+
space_int_props_1049, space_intval_1046
+
space_int_props_A_1050
+
space_int_props_1049
+
space_intval_1046
+
space_B_1043
+
space_intval_A_1047
+
space_intval_1046
+
space_motion_1041
+
space_1040
+
space_offset_1045
+
space_C_1044, space_motion_1041
+
statement_11
+
paragraph_10, statement_11
+
statement_A_12
+
statement_11, statement_A_12
+
statement_B_13
+
statement_A_12, statement_B_13
+
statement_C_14
+
statement_B_13
+
SU_603
+
null_1101
+
sub_gap_462
+
subscript_486
+
subscript_486
+
free_modifier_A_33
+
subsentence_41
+
gek_sentence_54, relative_clause_122, subsentence_41, tanru_unit_B_152
+
sumti_90
+
discursive_bridi_34, modifier_84, operand_C_385, sumti_A_91, sumti_D_94, sumti_G_97, sumti_tail_A_112, tanru_unit_B_152, term_83, vocative_35
+
sumti_A_91
+
sumti_90
+
sumti_B_92
+
sumti_A_91, sumti_B_92
+
sumti_C_93
+
sumti_B_92, sumti_C_93
+
sumti_D_94
+
sumti_C_93, sumti_D_94
+
sumti_E_95
+
sumti_D_94
+
sumti_F_96
+
sumti_E_95
+
sumti_G_97
+
sumti_F_96, sumti_tail_111
sumti_tail_111
+
description_110
+
sumti_tail_A_112
+
sumti_tail_111
+
tag_491
+
gek_sentence_54, mod_head_490, selbri_130, statement_C_14, tag_491, tanru_unit_B_152
+
TAhE_604
+
interval_property_1051
+
tail_terms_71
+
bridi_tail_50, bridi_tail_A_51, bridi_tail_B_52, bridi_tail_C_53, gek_sentence_54
+
tanru_unit_150
+
selbri_F_136, tanru_unit_150
+
tanru_unit_A_151
+
tanru_unit_150
+
tanru_unit_B_152
+
tanru_unit_A_151, tanru_unit_B_152
+
TEhU_675
+
TEhU_gap_473
+
TEhU_gap_473
+
MEX_operator_374, operand_C_385
+
TEI_605
+
lerfu_word_987
+
tense_A_977
+
simple_tense_modal_A_973
+
tense_B_978
+
tense_A_977
+
tense_C_979
+
tense_B_978
+
tense_modal_815
+
tag_491
+
term_83
+
linkargs_160, links_161, relative_clause_122, terms_B_82
+
terms_80
+
discursive_bridi_34, fragment_20, prenex_30, sentence_40, tail_terms_71, terms_80, term_set_85
+
terms_A_81
+
terms_80, terms_A_81
+
terms_B_82
+
terms_A_81, terms_B_82
+
term_set_85
+
term_83
+
text_0
+
parenthetical_36, quote_arg_A_433, text_0
+
text_A_1
+
text_0
+
text_B_2
+
statement_C_14, text_A_1, text_B_2
+
text_C_3
+
null_1101, text_B_2
+
time_1030
+
tense_C_979
+
time_A_1031
+
time_1030
+
time_B_1032
+
time_A_1031, time_B_1032
+
time_direction_1035
+
time_interval_1034, time_offset_1033
+
time_interval_1034
+
time_A_1031
+
time_int_props_1036
+
time_interval_1034, time_int_props_1036
+
time_offset_1033
+
time_B_1032
+
TO_606
+
parenthetical_36
+
TOI_607
+
TOI_gap_468
+
TOI_gap_468
+
parenthetical_36
+
TUhE_447
+
statement_C_14
+
TUhE_610
+
TUhE_447
+
TUhU_611
+
TUhU_gap_454
+
TUhU_gap_454
+
statement_C_14
+
UI_612
+
indicator_413, token_1100
+
utterance_20
+
null_1101
+
utterance_ordinal_801
+
free_modifier_A_33
+
utt_ordinal_root_906
+
lexer_A_905
+
VA_613
+
space_A_1042, space_offset_1045
+
VAU_614
+
VAU_gap_456
+
VAU_gap_456
+
fragment_20, tail_terms_71
+
VEhA_615
+
space_intval_A_1047
+
VEhO_678
+
right_bracket_gap_471, right_br_no_free_474
+
VEI_677
+
left_bracket_470
+
VIhA_616
+
space_intval_A_1047
+
vocative_35
+
free_modifier_A_33
+
VUhO_497
+
sumti_90
+
VUhO_617
+
VUhO_497
+
VUhU_679
+
MEX_operator_374
+
XI_424
+
subscript_486
+
XI_618
+
XI_424
+
Y_619
+
indicator_413, token_1100
+
ZAhO_621
+
interval_property_1051
+
ZEhA_622
+
time_interval_1034
+
ZI_624
+
time_1030, time_offset_1033
+
ZIhE_487
+
relative_clauses_121
+
ZIhE_625
+
ZIhE_487
+
ZO_626
+
ZO_quote_435
+
ZOhU_492
+
prenex_30
+
ZOhU_628
+
ZOhU_492
+
ZOI_627
+
ZOI_quote_434
+
ZOI_quote_434
+
quote_arg_A_433
+
ZO_quote_435
+
quote_arg_A_433
+
+
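The cross-reference listing above maps each grammar symbol to every rule whose definition mentions it. The same kind of reverse index can be derived mechanically from the forward productions; here is a minimal sketch in Python. The symbol names are taken from the listing above, but the production bodies shown are illustrative stand-ins, not the actual baseline grammar.

```python
from collections import defaultdict

# Forward productions: rule head -> symbols on its right-hand side.
# Heads and symbols are names from the index above; the exact bodies
# are hypothetical examples, not the real baseline rules.
productions = {
    "relative_clause_122": ["GOI_485", "NOI_484", "KUhO_gap_469"],
    "relative_clauses_121": ["relative_clause_122", "ZIhE_487"],
    "qualifier_483": ["LAhE_561", "NAhE_BO_809"],
}

def cross_reference(prods):
    """For each symbol, collect the rules whose bodies reference it,
    sorted alphabetically as in the index above."""
    refs = defaultdict(set)
    for head, body in prods.items():
        for sym in body:
            refs[sym].add(head)
    return {sym: sorted(heads) for sym, heads in refs.items()}

xref = cross_reference(productions)
# e.g. xref["GOI_485"] lists relative_clause_122, matching the index entry.
```

Running this over the full grammar (rather than the three sample rules) would regenerate the listing above, which is why the commit message notes that maintaining the cross-reference by hand "really shouldn't be manual".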
diff --git a/21/2/index.html b/21/2/index.html
index 19a5ba5..defc1d8 100644
--- a/21/2/index.html
+++ b/21/2/index.html
@@ -18,22 +18,22 @@

2. EBNF Grammar of Lojban

-Lojban Machine Grammar, EBNF Version, 3rd Baseline as of 10 January 1997
-
-This document is explicitly dedicated to the public domain by its author, the Logical Language Group Inc. Contact that organization at: 2904 Beau Lane, Fairfax VA 22031 USA 703-385-0273 (intl: +1 703 385 0273)

+Lojban Machine Grammar, EBNF Version, Final Baseline
+
+This EBNF document is explicitly dedicated to the public domain by its author, The Logical Language Group, Inc. Contact that organization at: 2904 Beau Lane, Fairfax VA 22031 USA 703-385-0273 (intl: +1 703 385 0273)

Explanation of notation: All rules have the form:

name_number = bnf-expression


which means that the grammatical construct "name" is defined by "bnf-expression". The number cross-references this grammar with the rule numbers in the YACC grammar. The names are the same as those in the YACC grammar, except that subrules are labeled with A, B, C, ... in the YACC grammar and with 1, 2, 3, ... in this grammar. In addition, rule 971 is "simple_tag" in the YACC grammar but "stag" in this grammar, because of its frequent appearance.
  1. Names in lower case are grammatical constructs.
  2. Names in UPPER CASE are selma'o (lexeme) names, and are terminals.
  3. Concatenation is expressed by juxtaposition with no operator symbol.
  4. | represents alternation (choice).
  5.
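The "name_number = bnf-expression" convention described above is regular enough to split mechanically. As a small sketch, the following Python separates each rule line into its name, YACC cross-reference number, and BNF expression; the two sample rules reuse symbol names from the cross-reference index, but their right-hand sides are made up for illustration and are not the actual baseline productions.

```python
import re

# Hypothetical sample rules in the notation described above
# (illustrative right-hand sides, not the real grammar).
SAMPLE = """\
text_0 = [indicators_411] paragraphs_4
relative_clauses_121 = relative_clause_122 [ZIhE_487 relative_clause_122]
"""

# A rule head is a name, an underscore, and the YACC rule number,
# followed by "=" and the BNF expression.
RULE_RE = re.compile(r"^(?P<name>[A-Za-z_']+?)_(?P<number>\d+)\s*=\s*(?P<expr>.+)$")

def parse_rules(src):
    """Split each 'name_number = bnf-expression' line into its parts."""
    out = []
    for line in src.splitlines():
        m = RULE_RE.match(line)
        if m:
            out.append((m.group("name"), int(m.group("number")), m.group("expr")))
    return out

rules = parse_rules(SAMPLE)
# rules[0] is ("text", 0, "[indicators_411] paragraphs_4")
```

The lazy match on the name is what lets subrule names such as "relative_clauses" keep their internal underscores while the trailing digits are still peeled off as the rule number.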