Latest revision as of 13:26, 8 May 2023
Morphological transducers
A morphological transducer is just a directed graph. It consists of nodes (numbered below) and arcs (with labels), with a starting node (0 below) and an ending node (16 below).
You follow the arcs that are available from your input. The only acceptable paths are ones that start at the starting node and end at the ending node. You may match your input to either side of an arc's label (separated by : above), and the other side is returned as output.
In the transducer above, the left side is the form and the right side is the analysis. If you match your input to the left side (the form), then your output will be the right side (the analysis)—this is morphological analysis. Likewise, if you follow the transducer by matching your input to the right side (the analysis) and output the left side (the form), then you are performing morphological generation.
An example of a complete path is w:w o:o l:l v:f e:<n> s:<pl>. The left/form side of this spells wolves and the right/analysis side of this spells wolf<n><pl>. Mapping between one and the other is as simple as taking one as input and following the path: by outputting the other side of each arc, you will get the other as output!
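The path-following procedure is easy to simulate. Here's a minimal Python sketch (the PATH data and follow helper are illustrative, not part of any Apertium tool): it walks the wolves ↔ wolf<n><pl> path above, matching input against one side of each arc and emitting the other.

```python
# A transducer path as a list of (left, right) arc labels for
# wolves <-> wolf<n><pl>.  (Illustrative data; not an Apertium API.)
PATH = [("w", "w"), ("o", "o"), ("l", "l"),
        ("v", "f"), ("e", "<n>"), ("s", "<pl>")]

def follow(path, text, side):
    """Match `text` against one side ("left" or "right") of each arc
    in order and emit the other side; return None on any mismatch."""
    src = 0 if side == "left" else 1
    out, pos = [], 0
    for arc in path:
        label = arc[src]
        if text[pos:pos + len(label)] != label:
            return None        # input doesn't follow this path
        pos += len(label)
        out.append(arc[1 - src])
    return "".join(out) if pos == len(text) else None

print(follow(PATH, "wolves", "left"))        # analysis: wolf<n><pl>
print(follow(PATH, "wolf<n><pl>", "right"))  # generation: wolves
```

Matching the left side gives analysis; matching the right side gives generation, exactly as described above.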
Question: What are all the possible paths provided by this transducer?
The formalism we use (lexd)
Transducers are pretty cool, and quite efficient... for computers. Following paths by hand is tedious, and drawing a transducer for anything more complex than the example above is torture. See the transducer below for Tuvan.
This transducer provides the combinations of about 8 case markers, 5 possessive morphemes, and the plural marker for three Tuvan nouns.
An example is өг>{L}{A}р>{i}м>{D}{A}н mapping to өг<n><pl><px1sg><abl>, meaning "from my houses". The analysis side is clear to anyone familiar with tags (and knowing that "өг" means "house"). The form side is actually something that will get fixed by morphophonology, which we'll worry about later (for now: letters like {L} can be realised in a variety of ways, and > is used as a morpheme boundary); the actual orthographic form is өглеримден.
Question: How can we quantify the complexity of this graph?
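One rough answer is to count the complete paths the graph encodes. A back-of-the-envelope Python sketch, assuming the counts given above (three stems, an optional plural, five possessive suffixes or none, and eight entries in the Cases lexicon):

```python
# Counting complete paths in the Tuvan noun fragment.
# Counts taken from the text; each optional slot contributes
# its alternatives plus one for "absent".
stems = 3       # өг, аът, ном
number = 2      # plural suffix present or absent
possession = 6  # five possessive suffixes, or none
cases = 8       # eight entries in the Cases lexicon (two are <all>)

paths = stems * number * possession * cases
print(paths)  # 288
```

Even this small fragment encodes a few hundred form/analysis pairs, which is why drawing such graphs by hand quickly becomes impractical.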
Fortunately, we don't have to draw this graph by hand. We can simply define the various sections of it and link them together with a straightforward formalism called lexd. A section of a lexd file that corresponds to the graph above looks like the following:
PATTERNS
N-Stems [ <n>: ] [ <pl>:>{L}{A}р ]? Possession? Cases

LEXICON N-Stems
өг:өг   # "yurt"
аът:аът # "horse"
ном:ном # "book"

LEXICON Possession
<px1sg>:>{i}м
<px2sg>:>{i}ң
<px3sp>:>{z}{I}{n}
<px1pl>:>{i}в{I}с
<px2pl>:>{i}ң{A}р

LEXICON Cases
<nom>:
<gen>:>{N}{I}ң
<acc>:>{N}I
<dat>:>{G}{A}
<loc>:>{D}{A}
<abl>:>{D}{A}н
<all>:>{J}е
<all>:>{D}{I}в{A} # Dir/LR
Questions
- What is # doing?
- What is : doing?
- What are the ?s doing?
- How are the lists of parts (LEXICONs) combined into a whole?
- What is not mentioned in this code that is in the graph above?
- Can you match sections of the graph to sections of the code?
Additional nuances
Phonology
The symbols like {L} above will need to be realised as different characters in different contexts.
For any symbols in your language that will be realised in different ways in different environments, you'll want to set up such an "archiphoneme". Use an uppercase letter for something that just has different forms, and use a lowercase letter for something that is inserted or deleted (i.e., is sometimes realised as nothing).
For now, it will suffice to define all the ways in which each archiphoneme surfaces by making a list in your twol file. This essentially allows all of the options to surface, which means you will be able to analyse incorrect forms as well as correct ones. Later, when you make a generator, you'll write rules to constrain where each of the symbols can occur.
Defining symbols
You'll want to define your archiphoneme symbols in the twol file, each with all its possible outputs.
So your twol file should contain an Alphabet section, which lists all the characters of the alphabet, and then all the archiphonemes with all their realisations. You will also want the > morpheme separator and some punctuation marks, all escaped. A condensed example for Tuvan follows:
Alphabet
А Б В Г Д Е Ё Ж З И Й К Л М Н Ң О Ө П Р С Т У Ү Ф Х Ц Ч Ш Щ Ъ Ы Ь Э Ю Я
а б в г д е ё ж з и й к л м н ң о ө п р с т у ү ф х ц ч ш щ ъ ы ь э ю я
%{A%}:а %{A%}:е
%{L%}:л %{L%}:н
%{i%}:0 %{i%}:ы %{i%}:и %{i%}:у %{i%}:ү
%. %- %>:0 ;
Starting point
You'll need at least one PATTERNS section in your lexd file. Bootstrapping a new language module per the instructions will create this for you, but don't forget that it's a thing!
Morphology that isn't suffixes
You may remember from last week that we discussed that analyses are generally in the form stem + POS tags + subcategory tags + function tags. What if some of your functional morphology occurs before the stem?
You can certainly implement that as shown above, but there's a problem: your tags will occur in the middle of the analysis. That is, instead of something like do<v><tv><rep><prog> ↔ redoing, you'd get something like <rep>do<v><tv><prog> ↔ redoing. This is undesirable in terms of keeping track of which of your tags are for what.
Fortunately lexd offers a trick for handling such things! You can list the left and right side of a given LEXICON in different parts of a PATTERN, and the pieces will be matched. The following is a simple example (which assumes that the "re" prefix in English should be treated as productive and inflectional, which it probably shouldn't):
PATTERNS
Verbs

PATTERN Verbs
V-Base V-Tenses

PATTERN V-Base
V-Stems(1) [<v>:] V-Stems(2):
V-Prefixes(1) V-Stems(1) [<v>:] V-Stems(2): V-Prefixes(2)

LEXICON V-Prefixes(2)
:re> <rep>:
:un> <rev>:

LEXICON V-Tenses
<prog>:>ing
<past>:>{e}d

LEXICON V-Stems(2)
tie:tie <tv>
start:start <tv>
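To see what the two columns of V-Prefixes contribute where, here's a simplified Python sketch of the pairing (hypothetical code, not how lexd works internally): it expands only the prefixed alternative of PATTERN V-Base, pairing each prefix's form column with its tag column.

```python
# Simplified sketch of lexd's column pairing: each V-Prefixes entry
# contributes its form column before the stem and its tag column
# after the subcategory tag.  Only the prefixed alternative of
# PATTERN V-Base is expanded here.
prefixes = [("re>", "<rep>"), ("un>", "<rev>")]           # (form, tag)
stems = [("tie", "tie", "<tv>"), ("start", "start", "<tv>")]
tenses = [("<prog>", ">ing"), ("<past>", ">{e}d")]        # (tag, form)

pairs = []  # (analysis, form) pairs, before morphophonology applies
for pform, ptag in prefixes:
    for lemma, stem, subcat in stems:
        for ttag, tform in tenses:
            analysis = lemma + "<v>" + subcat + ptag + ttag
            form = pform + stem + tform
            pairs.append((analysis, form))

print(pairs[0])  # ('tie<v><tv><rep><prog>', 're>tie>ing')
```

Note how the prefix tag ends up after the stem and subcategory tags in the analysis even though the prefix form precedes the stem in the form.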
Questions:
- What are the forms output by this transducer?
- (How many forms are there?)
- What are the numbers in ()?
- What part of the file adds the prefixes and what part of the file adds the prefix tags?
- What are the different PATTERN groups for?
another example
Here's a better example, showing (productive, inflectional) gender agreement on verbs in Avar. Note the other way of matching prefixes to tags (the V-Gender lexicon as compared to V-Prefixes in English).
PATTERNS
Verbs

PATTERN Verbs
:V-Gender V-Stems(1) [<v>:] V-Stems(2): V-Tense V-Gender:

LEXICON V-Tense
<aor>:>уна

LEXICON V-Gender
<nt>:б>
<m>:в>
<f>:й>
<pl>:р>

LEXICON V-Stems(2)
бицине:иц <tv> # "to say"
The output analyses would be the following:
бицине<v><tv><aor><nt>:бицуна
бицине<v><tv><aor><f>:йицуна
бицине<v><tv><aor><m>:вицуна
бицине<v><tv><aor><pl>:рицуна
In-class exercise
See Morphological analyser/Exercises.
The work we did on this in class is available on Swarthmore's github at ling073-sp23/ling073-eng.
Documentation
Documentation of lexd is available.
Evaluation
This section lists various commands that will help you evaluate your progress / how well the analyser is functioning.
Individual forms
To test whether/how your analyser is analysing a form, you can run the following:
echo "form" | apertium -d /path/to/analyser/ xyz-morph
An example might be the following:
apertium-tyv$ echo өглеримден | apertium -d . tyv-morph
^өглеримден/өг<n><pl><px1sg><abl>$^./.<sent>$
You can also do it this way:
apertium-tyv$ echo өглеримден | hfst-proc tyv.automorf.hfst
This output means that for the form өглеримден there is one analysis: өг<n><pl><px1sg><abl>. A form with multiple analyses would have them separated by /, like the following:
^өг/өг<n><nom>/өг<n><attr>/өг<n><nom>+э<cop><aor><p3><sg>$^./.<sent>$
A form with no analyses in the transducer will just return the form with an * before it, like the following:
^өглеримнен/*өглеримнен$^./.<sent>$
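The ^…$ stream format is easy to pick apart programmatically if you want to post-process analyser output. A minimal Python sketch (the parse_unit helper is hypothetical; it assumes well-formed units with no escaped ^, $, or / characters):

```python
def parse_unit(unit):
    """Split one ^form/analysis1/...$ unit into (form, [analyses]).
    An unknown form ^form/*form$ yields (form, [])."""
    body = unit.strip("^$")
    form, *analyses = body.split("/")
    if analyses and analyses[0].startswith("*"):
        return form, []  # unknown form: no analyses
    return form, analyses

print(parse_unit("^өглеримден/өг<n><pl><px1sg><abl>$"))
print(parse_unit("^өглеримнен/*өглеримнен$"))
```

The first call returns the form with its one analysis; the second recognises the * marker and returns an empty analysis list.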
See full contents of analyser
The following command outputs the full contents of the transducer. Note that it only does one cycle through the graph, which means for numbers you'll only get double digits. Without -c0 (or -c1, etc., to tell it how many times to cycle through) it will cycle indefinitely, which probably isn't what you want.
hfst-expand -c0 xyz.automorf.hfst
If all the numbers are annoying, you can do this to get only non-number contents:
hfst-expand -c0 xyz.automorf.hfst | grep -v '<num>'
Likewise, you can focus on a particular part of speech, e.g. <n>s:
hfst-expand -c0 xyz.automorf.hfst | grep '<n>'
A long list of forms with known analyses
To test whether your analyser is analysing forms correctly, you can
- put your analyses or forms into a file in test/ and run apertium-regtest.
- put your analyses into a yaml file and use morph-test (or aq-morftest):
  morph-test -csi xyz.yaml | most
  or
  aq-morftest -csi xyz.yaml | most
Coverage over a corpus
To test coverage over a corpus, you can use coverage-hfst or aq-covtest:
coverage-hfst xyz.corpus.basic.txt /path/to/xyz.automorf.hfst
aq-covtest xyz.corpus.basic.txt /path/to/xyz.automorf.bin
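Coverage is just the fraction of tokens that receive at least one analysis. If you want to compute it yourself from analyser output in the stream format shown earlier, here's a naive Python sketch (the coverage helper is hypothetical, not the actual implementation of these tools):

```python
import re

def coverage(analysed_output):
    """Fraction of ^...$ units with at least one analysis; unknown
    forms are marked with * after the slash."""
    units = re.findall(r"\^[^$]*\$", analysed_output)
    if not units:
        return 0.0
    known = sum(1 for u in units if "/*" not in u)
    return known / len(units)

sample = "^өг/өг<n><nom>$ ^өглеримнен/*өглеримнен$"
print(coverage(sample))  # 0.5
```

Here one of the two tokens is unknown, so coverage is 50%; the real tools also report the most frequent unanalysed forms, which this sketch omits.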
Generating forms
If you need to test how a form generates, you can do something like the following:
echo "^house<n><pl>$" | apertium -d . -f none xyz-gener
This will return all forms currently being generated, e.g. houses/housees.
But note that the next lab will be about morphological generation.
Counting lexicon entries
You can run the following command to list lexicons and counts in them all:
lexd -x apertium-xyz.xyz.lexd > /dev/null
Note that a number of these lexicons are probably related to your morphology. If you're counting "lexical entries", you should probably exclude the morphology ones.
The assignment
This assignment will be due at the end of the day Friday of the 6th week of class (this semester: 23:59 on Friday, February 24th, 2023).
This assignment is to develop a morphological analyser that implements a good deal of the basic morphology of your language.
Getting set up
- Bootstrap a transducer for your language using apertium-init (installed on the lab machines):
  apertium-init -a lexd --with-spellrelax --prefix=ling073 xyz
- Create a new empty repo (that is, don't check the README option) in the course's GitHub organisation with the name ling073-xyz (with your language's code in place of xyz). Then add the SSH link as a remote origin in your initialised module and push the module to the GitHub repo:
  git remote add origin git@github.swarthmore.edu:Ling073-sp23/ling073-xyz.git
  git push --set-upstream origin master
- Go into the new directory (cd ling073-xyz), initialise the module (./autogen.sh), and compile it (make).
  - If this is successful, you should have several "modes" available; run apertium -d . -l to see.
  - One mode should be an xyz-morph mode; this is your analyser. Check it by running echo "houses" | apertium -d . xyz-morph, which should give you a morphological analysis of the word "houses".
- Integrate any comments I've provided to you on your grammar documentation page so that all of your morphTests are in good order. See the sanity checks at Grammar documentation#Sanity checks to check the main things.
- Augment the commented section at the top of the apertium-xyz.xyz.lexd file with any tags you came up with during the Grammar documentation assignment that aren't there already. Provide a symbol, and a brief comment explaining what the symbol means.
- Add all the characters of your language's orthography to the Alphabet section of the apertium-xyz.xyz.twol file. You may need to add archiphonemes later.
- Use the morphTests2regtest script (installed on the lab machines) to create a set of test files in a subdirectory called test/.
  - Commit these files to the git repo!
  - (You can remove empty files if you like.)
  - There should be at least 50 tests in the "-morph" files in this directory. You can make sure you have enough by counting lines in the *-morph-input.txt files; this will count the number of tests in each file and also return a total:
    wc -l test/*/*-morph-input.txt
The hard stuff
- Build your morphological transducer, adding all of the stems from your Grammar documentation assignment, categorised correctly, so that at least half of your tests pass. You'll need to build up the morphotactics too.
- If too many of your grammar points are too hard to implement at this point (e.g., require some rules to change some characters to other characters), then you can skip one or two of them and instead add more "easy" forms to your transducer.
- Alternatively "hard-code" some forms, but add a comment in the lexd file near the relevant forms indicating that they need further work, and mention it on the wiki page (see below).
- Also don't forget to clean up your grammar page, and rerun the scraping script to get a fresh version of the tests file. If you're happy with your grammar page and don't expect to change it much, feel free to "clean up" your tests file manually.
- Create a page on the wiki Language/Transducer that links to the code and has Evaluation and Notes sections.
  - In the Notes section, say what tests still don't work and why.
  - Add the page to the category Category:Sp23_Transducers.
Evaluation
When you've finished getting half of your tests to pass, evaluate coverage on your corpus and add some of the most frequent unanalysed words:
- Use coverage-hfst or aq-covtest (as above) to see how many forms in your basic corpus are analysed, and what the top unknown forms are.
  - Make note of the coverage at this point.
- Make a new txt file in your tests directory with the top unanalysed words.
  - Name the file something like commonwords-morph-input.txt
  - For each analysis, just put an <unk> tag (for "unknown") after each form.
  - You'll need to add these files to your test/tests.json file. See Apertium-regtest#Manually adding tests for more information on manually adding tests.
  - Don't forget to commit these files to your git repository!
- Figure out what the analyses of at least three of these words should be, using the resources you have available (grammar books, etc.), and add gold accordingly.
- Add at least one of these analyses to your transducer so that the test passes.
- Rerun apertium-regtest test to see by how much your coverage improved.
  - Add a note to the notes section of the additional top word(s) you added, and the resulting change in coverage (e.g., «by adding "and<cnjcoo> ↔ and" to the transducer, coverage went from 19.76% to 22.32%»)
In the Evaluation section on the wiki page, add the following:
- Total number of stems in the transducer. You can use the following method, or count the stems manually.
  - lexd -x apertium-xyz.xyz.lexd > /dev/null (then add the counts for the relevant individual lexicons)
- Current coverage over your combined corpus
- The current list of top unknown words returned by apertium-regtest test -c .-morph
- Number of tests that pass in each test file
  - The test files should have at least half of the tests passing
  - The commonwords-morph-*.txt files should have at least 1 passing test
Housekeeping
- Add yourself to the AUTHORS file.
- Make sure the COPYING file contains an open-source license to your liking (default should be GPL3).
- Add links to the transducer repo and wiki page to the list of resources you developed for your language on the language's page on this wiki.
Sanity checks before submitting
- Did you commit just the initial files created by bootstrap before you initialised or compiled the module? If not, start over with bootstrapping, being sure to copy over any files you've changed. Or use this method.
- Did you commit your updates to lexd and twol files? And the txt test files?
- Do you have at least 50 tests in the main tests file? Do at least half of them match gold via apertium-regtest test -c .-morph?
- Did you add everything asked for to the wiki page (evaluation, etc.) and your repo (e.g., files in your test/ directory)?
- If you have trouble analysing or compiling, are all your tags and symbols (full alphabet) defined in your twol file?