Re: semantics ...
- Subject: Re: semantics ...
- From: Robin Turner <robin@bilkent.edu.tr>
- Date: Sun, 04 Apr 1999 16:23:48 +0300
la sidirait. cusku di'e
> > One solution would be to adopt a feature-based analysis
> > of the gismu involved, using features which are, as far
> > as possible, consistent across cultures.
>
> > An alternative ... use the Natural Semantic Metalanguage (NSM) ... which
> > aims to define terms using a limited number of universally
> > accepted words (I think the current total is 90).
>
> In the end, aren't these two methods basically the same?
> In the final analysis, don't they both say ...
>
> There is a small, enumerated list of points in semantic
> space which have the following properties:
>
> 1. Every language has a word at that location
> 2. Between them they cover *all* of semantic space.
>
> ??
Well, yes, more or less, which is exactly what I find questionable about both,
especially the second claim. OK, maybe you could produce a minimal set of
words to cover all of semantic space, but such a set would end up with some
hopelessly vague categories, in the same way that Klingon covers all of
grammatical space by classifying words as "nouns, verbs and everything else".
However, if we leave the more exaggerated claims of both methods aside and
concentrate on what they can do in practice, I think it's largely a matter
of taste which one you use. In my own work I use a variably weighted
feature approach, since I find that the most convenient way of describing
the structure of a category, but NSM may be more useful for
definition-writing. For example:
ninmu
x1 [is female] AND {[is human] OR [is like a human]}
Obviously with 5-place gismu it gets a bit more complicated.
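To make the notation concrete, here is a minimal Python sketch of how such a
feature-based definition could be encoded - not anyone's actual system; the
feature names, helper functions and weights are all invented for illustration.

from typing import Callable, Set

FeatureSet = Set[str]
Test = Callable[[FeatureSet], bool]

def has(feature: str) -> Test:
    """Require a single semantic feature, e.g. has("female")."""
    return lambda fs: feature in fs

def all_of(*tests: Test) -> Test:
    """Boolean AND over feature tests."""
    return lambda fs: all(t(fs) for t in tests)

def any_of(*tests: Test) -> Test:
    """Boolean OR over feature tests."""
    return lambda fs: any(t(fs) for t in tests)

# ninmu x1: [is female] AND {[is human] OR [is like a human]}
ninmu_x1 = all_of(has("female"), any_of(has("human"), has("humanlike")))

print(ninmu_x1({"female", "human"}))    # True
print(ninmu_x1({"female", "feline"}))   # False

# A variably weighted variant would replace the strict booleans with
# numeric weights, giving graded category membership; the numbers
# below are invented.
WEIGHTS = {"female": 0.6, "human": 0.3, "humanlike": 0.1}

def membership(fs: FeatureSet) -> float:
    return sum(w for f, w in WEIGHTS.items() if f in fs)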
Also, in compiling a master definitions list, you don't have to break every
entry down into your minimal set of words/features; you can define the more
common compounds at the start and let later entries refer to them.
co'o mi'e robin.