Morphological analyser


Morphological transducers

A morphological transducer is just a directed graph. It consists of nodes (numbered below) and arcs (with labels), with a starting node (0 below) and an ending node (16 below).

[[File:Simple transducer.png]]

You follow the arcs that are available from your input. The only acceptable paths are ones that start at the starting node and end at the ending node. You may match your input to either side of an arc's label (the two sides are separated by : above), and the other side is returned as output.

In the transducer above, the left side is the form and the right side is the analysis. If you match your input to the left side (the form), then your output will be the right side (the analysis)—this is morphological analysis. Likewise, if you follow the transducer by matching your input to the right side (the analysis) and output the left side (the form), then you are performing morphological generation.

An example of a complete path is w:w o:o l:l v:f e:<n> s:<pl>. The left/form side of this path spells wolves and the right/analysis side spells wolf<n><pl>. Mapping between one and the other is as simple as taking one as input and following the path: output the other side of each arc, and you get the other string as output.
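
Laid out arc by arc, the analysis direction of this path looks like the following (the generation direction simply swaps the input and output rows):

   input (form):       w    o    l    v    e      s
   arcs followed:      w:w  o:o  l:l  v:f  e:<n>  s:<pl>
   output (analysis):  w    o    l    f    <n>    <pl>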

Question: What are all the possible paths provided by this transducer?

The formalism we use (lexd)

Transducers are pretty cool, and quite efficient... for computers. Following paths by hand is tedious, and drawing a transducer for anything more complex than the example above is torture. See the transducer below for Tuvan.

[[File:Tuvan transducer.png]]

This transducer provides the combinations of about eight case markers, five possessive morphemes, and the plural marker for three Tuvan nouns.

An example is өг>{L}{A}р>{i}м>{D}{A}н mapping to өг<n><pl><px1sg><abl>, meaning "from my houses". The analysis side is clear to anyone familiar with tags (and knowing that "өг" means "house"). The form side is actually something that will get fixed by morphophonology, which we'll worry about later (for now: letters like {L} can be realised in a variety of ways, and > is used as a morpheme boundary); the actual orthographic form is өглеримден.
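
Broken down morpheme by morpheme, that pair looks like the following (the glosses follow the lexd fragment below):

   form:      өг       >{L}{A}р   >{i}м      >{D}{A}н
   analysis:  өг<n>    <pl>       <px1sg>    <abl>
   gloss:     "house"  plural     my (1sg)   from (ablative)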

Question: How can we quantify the complexity of this graph?

Fortunately, we don't have to draw this graph by hand. We can simply define the various sections of it and link them together with a straightforward formalism called lexd. A section of a lexd file that corresponds to the graph above looks like the following:

PATTERNS

N-Stems [ <n>: ] [ <pl>:>{L}{A}р ]? Possession? Cases


LEXICON N-Stems

өг:өг     # "yurt"
аът:аът   # "horse"
ном:ном   # "book"

LEXICON Possession

<px1sg>:>{i}м
<px2sg>:>{i}ң
<px3sp>:>{z}{I}{n}
<px1pl>:>{i}в{I}с
<px2pl>:>{i}ң{A}р

LEXICON Cases

<nom>:
<gen>:>{N}{I}ң
<acc>:>{N}{I}
<dat>:>{G}{A}
<loc>:>{D}{A}
<abl>:>{D}{A}н
<all>:>{J}е
<all>:>{D}{I}в{A}   # Dir/LR
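
For reference, here are a few of the analysis:form pairs that this fragment defines; the form side still contains archiphonemes and > boundaries, which the morphophonology will resolve later:

өг<n><nom>:өг
өг<n><pl><abl>:өг>{L}{A}р>{D}{A}н
ном<n><px2sg><dat>:ном>{i}ң>{G}{A}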

Questions

  • What is # doing?
  • What is : doing?
  • What are the ?s doing?
  • How are the lists of parts (LEXICONs) combined into a whole?
  • What is not mentioned in this code that is in the graph above?
  • Can you match sections of the graph to sections of the code?

Additional nuances

Phonology

The symbols like {L} above will need to be realised as different characters in different contexts.

For any symbols in your language that will be realised in different ways in different environments, you'll want to set up such an "archiphoneme". Use an uppercase letter for something that just has different forms, and use a lowercase letter for something that is inserted or deleted (i.e., is sometimes realised as nothing).

For now, it will suffice to define all the ways in which each archiphoneme surfaces by making a list in your twol file. This essentially allows all of the options to surface, which means you will be able to analyse incorrect forms as well as correct ones. Later, when you make a generator, you'll write rules to constrain where each of the symbols can occur.

Defining symbols

You'll want to define your archiphoneme symbols in the twol file, each with all its possible outputs.

So your twol file should contain an Alphabet section, which lists all the characters of the alphabet, and then all the archiphonemes with all their realisations. You will also want the > morpheme separator and some punctuation marks, all escaped. A condensed example for Tuvan follows:

Alphabet

   А Б В Г Д Е Ё Ж З И Й К Л М Н Ң О Ө П Р С Т У Ү Ф Х Ц Ч Ш Щ Ъ Ы Ь Э Ю Я
   а б в г д е ё ж з и й к л м н ң о ө п р с т у ү ф х ц ч ш щ ъ ы ь э ю я

   %{A%}:а %{A%}:е
   %{L%}:л %{L%}:н
   %{i%}:0 %{i%}:ы %{i%}:и %{i%}:у %{i%}:ү

   %.
   %-

   %>:0 
 ;

Starting point

You'll need at least one PATTERNS section in your lexd file. Bootstrapping a new language module per the instructions will create this for you, but don't forget that it's a thing!

Morphology that isn't suffixes

You may remember that last week we discussed how analyses are generally of the form stem + POS tags + subcategory tags + function tags. What if some of your functional morphology occurs before the stem?

You can certainly implement that as shown above, but there's a problem: your tags will occur in the middle of the analysis. That is, instead of something like do<v><tv><rep><prog> ↔ redoing, you'd get something like <rep>do<v><tv><prog> ↔ redoing. This makes it harder to keep track of which of your tags are for what.

Fortunately lexd offers a trick for handling such things! You can list the left and right side of a given LEXICON in different parts of a PATTERN, and the pieces will be matched. The following is a simple example (which assumes that the "re" prefix in English should be treated as productive and inflectional, which it probably shouldn't):

PATTERNS

Verbs

PATTERN Verbs

V-Base V-Tenses

PATTERN V-Base

V-Stems(1) [<v>:] V-Stems(2):
V-Prefixes(1) V-Stems(1) [<v>:] V-Stems(2): V-Prefixes(2)

LEXICON V-Prefixes(2)

:re>   <rep>:
:un>   <rev>:

LEXICON V-Tenses

<prog>:>ing
<past>:>{e}d

LEXICON V-Stems(2)

tie:tie      <tv>
start:start  <tv>
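
To see how the matching works, consider two of the analysis:form pairs this fragment defines; note how V-Prefixes(1) contributes re> on the form side while V-Prefixes(2) contributes the corresponding <rep> tag on the analysis side:

tie<v><tv><prog>:tie>ing
tie<v><tv><rep><prog>:re>tie>ing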

Questions:

  • What are the forms output by this transducer?
  • (How many forms are there?)
  • What are the numbers in ()?
  • What part of the file adds the prefixes and what part of the file adds the prefix tags?
  • What are the different PATTERN groups for?

Another example

Here's a better example, showing (productive, inflectional) gender agreement on verbs in Avar. Note the other way of matching prefixes to tags (the V-Gender lexicon as compared to V-Prefixes in English).

PATTERNS

Verbs

PATTERN Verbs

:V-Gender V-Stems(1) [<v>:] V-Stems(2): V-Tense V-Gender:

LEXICON V-Tense

<aor>:>уна

LEXICON V-Gender

<nt>:б>
<m>:в>
<f>:й>
<pl>:р>

LEXICON V-Stems(2)

бицине:иц     <tv>  # "to speak"

The output analyses would be the following:

бицине<v><tv><aor><nt>:бицуна
бицине<v><tv><aor><f>:йицуна
бицине<v><tv><aor><m>:вицуна
бицине<v><tv><aor><pl>:рицуна

In-class exercise

See Morphological analyser/Exercises.

The work we did on this in class is available on Swarthmore's GitHub at ling073-sp23/ling073-eng.

Documentation

Documentation of lexd is available.

Evaluation

This section lists various commands that will help you evaluate your progress, i.e., how well the analyser is functioning.

Individual forms

To test whether/how your analyser is analysing a form, you can run the following:

echo "form" | apertium -d /path/to/analyser/ xyz-morph

An example might be the following:

apertium-tyv$ echo өглеримден | apertium -d . tyv-morph
^өглеримден/өг<n><pl><px1sg><abl>$^./.<sent>$

You can also do it this way:

apertium-tyv$ echo өглеримден | hfst-proc tyv.automorf.hfst

This output means that for the form өглеримден there is one analysis: өг<n><pl><px1sg><abl>. A form with multiple analyses would have them separated by /, like the following:

^өг/өг<n><nom>/өг<n><attr>/өг<n><nom>+э<cop><aor><p3><sg>$^./.<sent>$

A form with no analyses in the transducer will just return the form with an * before it, like the following:

^өглеримнен/*өглеримнен$^./.<sent>$

See full contents of analyser

The following command outputs the full contents of the transducer. Note that it only does one cycle through the graph, which means for numbers you'll only get double digits. Without -c0 (or -c1, etc.—to tell it how many times to cycle through) it will cycle indefinitely, which probably isn't what you want.

hfst-expand -c0 xyz.automorf.hfst

If all the numbers are annoying, you can do this to get only non-number contents:

hfst-expand -c0 xyz.automorf.hfst | grep -v '<num>'

Likewise, you can focus on a particular part of speech, e.g. <n>s:

hfst-expand -c0 xyz.automorf.hfst | grep '<n>'

A long list of forms with known analyses

To test whether your analyser is analysing forms correctly, you can

  1. put your analyses or forms into a file in test/ and run apertium-regtest.
  2. put your analyses into a yaml file and use morph-test (or aq-morftest); a minimal sketch of such a file follows this list:
    morph-test -csi xyz.yaml | most
    or
    aq-morftest -csi xyz.yaml | most
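
A minimal sketch of such a yaml file follows; the Tuvan paths and the single test entry here are only placeholder assumptions, so substitute your own transducer paths and analyses:

    Config:
      hfst:
        Gen: ../../tyv.autogen.hfst
        Morph: ../../tyv.automorf.hfst

    Tests:

      Noun - possession and case:
        өг<n><pl><px1sg><abl>: өглеримден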

Coverage over a corpus

To test coverage over a corpus, you can use coverage-hfst or aq-covtest:

coverage-hfst xyz.corpus.basic.txt /path/to/xyz.automorf.hfst
aq-covtest xyz.corpus.basic.txt /path/to/xyz.automorf.bin

Generating forms

If you need to test how a form generates, you can do something like the following:

echo "^house<n><pl>$" | apertium -d . -f none xyz-gener

This will return all forms currently being generated, e.g. houses/housees.

But note that the next lab will be about morphological generation.

Counting lexicon entries

You can run the following command to list all lexicons and the number of entries in each:

lexd -x apertium-xyz.xyz.lexd > /dev/null

Note that a number of these lexicons are probably related to your morphology. If you're counting "lexical entries", you should probably exclude the morphology ones.

The assignment

This assignment will be due at the end of the day Friday of the 6th week of class (this semester: 23:59 on Friday, February 24th, 2023).

This assignment is to develop a morphological analyser that implements a good deal of the basic morphology of your language.

Getting set up

  1. Bootstrap a transducer for your language using apertium-init (installed on the lab machines):
    apertium-init -a lexd --with-spellrelax --prefix=ling073 xyz
  2. Create a new empty repo (that is, don't check the README option) in the course's GitHub organisation with the name ling073-xyz (with your language's code in place of xyz). Then add the SSH link as a remote origin in your initialised module and push the module to the GitHub repo:
    git remote add origin git@github.swarthmore.edu:Ling073-sp23/ling073-xyz.git
    git push --set-upstream origin master
  3. Go into the new directory (cd ling073-xyz), initialise the module (./autogen.sh), and compile it (make).
    • If this is successful, you should have several "modes" available; run apertium -d . -l to see.
    • One mode should be an xyz-morph mode; this is your analyser. Check it by running echo "houses" | apertium -d . xyz-morph, which should give you a morphological analysis of the word "houses".
  4. Integrate any comments I've provided to you on your grammar documentation page so that all of your morphTests are in good order. See the sanity checks at Grammar documentation#Sanity checks to check the main things.
  5. Augment the commented section at the top of the apertium-xyz.xyz.lexd file with any tags you came up with during the Grammar documentation assignment that aren't there already. Provide a symbol, and a brief comment explaining what the symbol means.
  6. Add all the characters of your language's orthography to the Alphabet section of the apertium-xyz.xyz.twol file. You may need to add archiphonemes later.
  7. Use the morphTests2regtest script (installed on the lab machines) to create a set of test files in a subdirectory called test/.
    • Commit these files to the git repo!
    • (You can remove empty files if you like.)
    • There should be at least 50 tests in the "-morph" files in this directory. You can make sure you have enough by counting lines in the *-morph-input.txt files; this will count the number of tests in each file and also return a total (example output below):
      wc -l test/*/*-morph-input.txt
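
With hypothetical file names and counts (yours will differ), the output of that wc command looks like the following; the total line is what you check against 50:

      28 test/xyz-morph/nouns-morph-input.txt
      25 test/xyz-morph/verbs-morph-input.txt
      53 total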

The hard stuff

  1. Build your morphological transducer, adding all of the stems from your Grammar documentation assignment, categorised correctly, so that at least half of your tests pass. You'll need to build up the morphotactics too.
    • If too many of your grammar points are too hard to implement at this point (e.g., require some rules to change some characters to other characters), then you can skip one or two of them and instead add more "easy" forms to your transducer.
    • Alternatively "hard-code" some forms, but add a comment in the lexd file near the relevant forms indicating that they need further work, and mention it on the wiki page (see below).
    • Also remember to clean up your grammar page, and rerun the scraping script to get a fresh version of the tests file. If you're happy with your grammar page and don't expect to change it much, feel free to "clean up" your tests file manually.
  2. Create a page on the wiki at Language/Transducer (substituting your language's name) that links to the code and has Evaluation and Notes sections.

Evaluation

Once at least half of your tests pass, evaluate coverage on your corpus and add one or more of the most frequent unanalysed words:

  1. Use coverage-hfst or aq-covtest (as above) to see how many forms in your basic corpus are analysed, and what the top unknown forms are.
    • Make note of the coverage at this point
  2. Make a new txt file in your tests directory with the top unanalysed words.
    • Name the file something like commonwords-morph-input.txt
    • For each form, just put an <unk> tag (for "unknown") as its analysis.
    • You'll need to add these files to your test/tests.json file. See Apertium-regtest#Manually adding tests for more information on manually adding tests.
    • Don't forget to commit these files to your git repository!
  3. Figure out what the analyses of at least three of these words should be, using the resources you have available (grammar books, etc.), and add the gold analyses accordingly.
  4. Add at least one of these analyses to your transducer so that the test passes.
  5. Rerun apertium-regtest test to see by how much your coverage improved.
    • Add a note to the notes section of the additional top word(s) you added, and the resulting change in coverage (e.g., «by adding "and<cnjcoo> ↔ and" to the transducer, coverage went from 19.76% to 22.32%»)

In the Evaluation section on the wiki page, add the following:

  • Total number of stems in the transducer. You can use the following method, or count the stems manually.
    lexd -x apertium-xyz.xyz.lexd > /dev/null (then add the counts for the relevant individual lexicons)
  • Current coverage over your combined corpus
  • The current list of top unknown words returned by apertium-regtest test -c .-morph
  • Number of tests that pass in each test file
    • The test files should have at least half of the tests passing
    • The commonwords-morph-*.txt files should have at least 1 passing test

Housekeeping

  1. Add yourself to the AUTHORS file.
  2. Make sure the COPYING file contains an open-source license to your liking (default should be GPL3).
  3. Add links to the transducer repo and wiki page to the list of resources you developed for your language on the language's page on this wiki.

Sanity checks before submitting

  1. Did you commit just the initial files created by bootstrapping before you initialised or compiled the module? If not, start over with bootstrapping, being sure to copy over any files you've changed. Or use this method.
  2. Did you commit your updates to lexd and twol files? And the txt test files?
  3. Do you have at least 50 tests in the main tests file? Do at least half of them match gold via apertium-regtest test -c .-morph?
  4. Did you add everything asked for to the wiki page (evaluation, etc.) and your repo (e.g., files in your test/ directory)?
  5. If you have trouble analysing or compiling, are all your tags and symbols (full alphabet) defined in your twol file?