Structural transfer

From LING073

Revision as of 11:16, 26 April 2021


The basic idea of structural transfer in RBMT

The idea of structural transfer in RBMT is to handle the word-order and tag differences encountered when translating between two languages.

The arrows between the two tagged levels represent where structural transfer is needed. Colour coding shows [rough] correspondences.

How structural transfer works in Apertium

Transfer takes the output of the biltrans mode (bilingual translation), matches series of words based on patterns you define, and performs operations on them before outputting them. It allows you to change the order of words, change tags, etc.

Syntactic Structures and Parsing

The way Apertium's recursive structural transfer system works is to parse and combine phrases, and then to output each parsed phrase. Rules specify what words or phrases are parsed together into phrases and how they're output.
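The parse-and-combine idea can be sketched schematically. The following is a toy illustration only (not Apertium's actual implementation): a hypothetical rule set, mirroring the English-to-Kyrgyz example below, maps sequences of categories to a phrase label, and reductions repeat until no rule applies.

```python
# Toy sketch of bottom-up phrase combination; NOT Apertium internals.
# Each rule maps a sequence of categories to a parent phrase label.
RULES = {
    ("n",): "NP",
    ("det", "NP"): "DP",
    ("prn",): "DP",
    ("vP", "DP"): "VP",
    ("DP", "VP"): "S",
}

def reduce_once(seq):
    """Apply the first matching rule anywhere in the sequence."""
    for i in range(len(seq)):
        for pattern, parent in RULES.items():
            if tuple(seq[i:i + len(pattern)]) == pattern:
                return seq[:i] + [parent] + seq[i + len(pattern):]
    return seq

def parse(seq):
    """Repeatedly reduce until nothing more combines."""
    while True:
        new = reduce_once(seq)
        if new == seq:
            return seq
        seq = new

print(parse(["det", "n"]))              # ['DP']
print(parse(["prn", "vP", "det", "n"]))  # ['S']
```

Real rules also carry features and output order, as described below; this sketch shows only the recursive parsing step.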

Example: English-to-Kyrgyz

The mapping between phrase-structure trees of the Kyrgyz and English sentence above ("I did not see the houses")

In the English-to-Kyrgyz example for "I did not see the houses", in both languages a noun (<n>) is parsed into an NP (noun phrase), and an NP can combine with a determiner (<det>) to form a DP (determiner phrase), although Kyrgyz doesn't have a definite article. They can also both form a DP from just a pronoun. These rules can be written as follows:

NP -> %n { %1 } ;
DP -> det %NP { %2 } |
      det %NP { 1 _ %2 } |
      %prn { %1 } ;

The beginning of each rule names the kind of phrase to build (NP and DP, respectively). The part between the arrow (->) and the {}s is the material to parse, in this case specified as POS tags (<n>, <det>) or other phrases (NP). The part inside the {}s gives the order in which to output the elements; in the last rule, the first element is not output. A % on an input element marks which element to copy features from up to the phrase level (different features can come from different elements; see the VP example below), and a % on an output element marks which element to copy missing features down to from the phrase node on output. The _ is simply a blank (roughly a space) in the output.

To decide between the first two DP rules, you would need to specify lemmas, add weighting, or use conditionals (specifying lemmas is discussed below; the rest are discussed later). By default, the first rule is applied, which in this case is the one we want.

Similarly to how nominals are handled above, verbs (<v>, <vblex>, etc.) and auxiliaries (<vaux>) can combine in various ways into a vP (first-level verb phrase) in both languages, although where English uses "do" auxiliaries, Kyrgyz uses no auxiliary, and Kyrgyz encodes the equivalent of a "not" adverbial as <neg> on the main verb.

vP -> do@AuxP.$tense.$polarity v.*.inf.$lemh.$transitivity { %2 } ;

Here the .$tense, .$polarity, .$lemh, and .$transitivity specifications tell the rule which element to get each of those named attributes (features) from, together acting like % (which copies anything not claimed by a $ attribute). Specifying .*.inf requires the <v> to also have an <inf> tag after any other tags, and specifying do@ requires the AuxP to have the lemma do, which would have been obtained during parsing by a rule like this:

AuxP -> %vbdo.$lemh/sl not@adv [$polarity=neg] { } | 
        %vbdo.$lemh/sl { } ;

Here the .$lemh/sl part ensures the AuxP gets its lemma from the source language (SL) lemma. This AuxP rule also matches sequences of a do auxiliary and not<adv> and sets the polarity of the AuxP to neg upon such a match.
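What the two AuxP rules express can be sketched as follows. This is a hypothetical illustration (not Apertium internals): a "do" auxiliary optionally followed by not<adv> forms an AuxP whose polarity is neg when "not" is present, and the empty { } output means the AuxP itself emits nothing (its features travel up to the vP).

```python
# Hypothetical sketch of the AuxP rules above; NOT Apertium internals.
def match_auxp(tokens):
    """tokens: list of (lemma, tag) pairs.
    Returns (AuxP features, remaining tokens), or (None, tokens)."""
    if tokens and tokens[0] == ("do", "vbdo"):
        # First rule: do<vbdo> followed by not<adv> sets polarity=neg.
        if len(tokens) > 1 and tokens[1] == ("not", "adv"):
            return {"lemh": "do", "polarity": "neg"}, tokens[2:]
        # Second rule: a bare do<vbdo>.
        return {"lemh": "do"}, tokens[1:]
    return None, tokens

feats, rest = match_auxp([("do", "vbdo"), ("not", "adv"), ("see", "v")])
print(feats)  # {'lemh': 'do', 'polarity': 'neg'}
print(rest)   # [('see', 'v')]
```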

The first difference in ordering is that a vP and a DP, while both being parsed into a VP (a top-level verb phrase), occur in different orders. If translating from English to Kyrgyz, a rule would need to parse a sequence of vP DP into VP, but output the components in the reverse order, and set the case of the DP to <acc> (how direct objects are marked in Kyrgyz). This rule would look something like this:

VP -> %vP DP { 2[case=acc] _ %1 } ;

In both languages, a DP and a VP combine to form an S (sentence) in the same order:

S -> DP.$person.$number VP { 1 _ %2 } ;

This rule gets the person and number attributes from the DP, and ensures that the VP (element 2), and hence the verb, gets those attributes on output (via the %).
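The feature flow in the S rule can be illustrated with a small sketch (hypothetical, not Apertium internals): the S node collects person and number from the DP, and the %2 output copies any of the VP's missing features down from S.

```python
# Illustrative sketch of S -> DP.$person.$number VP { 1 _ %2 }.
def build_s(dp_feats, vp_feats):
    # $person/$number: S takes these attributes from the DP.
    s_feats = {"person": dp_feats["person"], "number": dp_feats["number"]}
    # %2 on output: fill in the VP's unset features from the S node.
    vp_out = dict(vp_feats)
    for attr, value in s_feats.items():
        vp_out.setdefault(attr, value)
    return s_feats, vp_out

s, vp = build_s({"person": "p1", "number": "sg"},
                {"tense": "past", "polarity": "neg"})
print(vp)  # the verb phrase inherits person and number from the subject
```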

In addition to these rules, each POS and phrase type needs an output pattern. These patterns would look something like this for this English-to-Kyrgyz example:

S: _.person.number ;
DP: _.person.number.possession.case ;
NP: _.number.possession.case ;
n: _.number.possession.case ;
prn: % ;
VP: _.transitivity.polarity.tense.person.number ;
vP: _.transitivity.polarity.tense.person.number ;
v: _.transitivity.polarity.tense.person.number ;
vaux: _.polarity.tense.person.number ;
vbdo: _.polarity.tense.person.number ;
AuxP: _.polarity.tense.person.number ;
det: % ;

These output patterns define the order in which to arrange attributes (sets of tags, which also need to be defined). The _ represents the lemma followed by the main POS tag. Attributes are defined as follows:

person = (PD p3) p1 p2 p3 PD ;
number = (ND sg) sg pl sp ND ;
polarity = (PolD "") neg PolD ;

For the most part these definitions are simple lists, but the first element, in ()s, defines both a filler used during parsing when no other information is available (e.g. ND, for "Number to be Determined") and the default that replaces it on output if no value has ever been set ("" outputs nothing).

This example is essentially complete for this sentence, and can be viewed in its entirety or tested in this repository. To handle other constructions related to this example or otherwise needed between these languages, you would add patterns, modify the existing ones, or use advanced features.

To see what the transfer stage is doing, you can do the following:

$ echo "I did not see the houses." | apertium -d . eng-kir-transfer

To see the parse tree (before things are adjusted for output), you can get the output of lexical selection and feed it into rtx-proc -T:

$ echo "I did not see the houses" | apertium -d . eng-kir-lex | rtx-proc -T eng-kir.rtx.bin 

In this case, the output looks like the following:


Example: English-to-Spanish

The mapping between phrase-structure trees of "in the big beautiful houses" (English) and "en las casas largas y bonitas" (Spanish)

The English-to-Spanish phrase pair "in the big beautiful houses" = "en las casas largas y bonitas" is shown in the image.

This phrase is a good example to walk through together in class or on your own. The following will need to be accounted for:

  • The order of AdjP and NP within DP,
  • The number and gender agreement on det and adjs,
  • The addition of "y" between two adjectives in Spanish as compared to English.
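As a warm-up for writing the actual RTX rules, the three points above can be simulated in a toy function (class-exercise style, not a real transfer rule; the agreement endings and determiner forms below are simplifying assumptions that happen to cover this example):

```python
# Toy simulation of DP transfer for "the big beautiful houses":
# noun-adjective reordering, number/gender agreement, and "y" insertion.
def dp_to_spanish(det, adjs, noun, number, gender):
    def agree(stem):
        # Simplified agreement: gender vowel plus plural -s.
        return stem + {"f": "a", "m": "o"}[gender] + ("s" if number == "pl" else "")

    det_form = {("sg", "f"): "la", ("sg", "m"): "el",
                ("pl", "f"): "las", ("pl", "m"): "los"}[(number, gender)]
    # Adjectives follow the noun; multiple adjectives are joined with "y".
    adj_part = " y ".join(agree(a) for a in adjs)
    noun_form = noun + ("s" if number == "pl" else "")
    return " ".join(x for x in [det_form, noun_form, adj_part] if x)

print(dp_to_spanish("the", ["larg", "bonit"], "casa", "pl", "f"))
# las casas largas y bonitas
```

(Real Spanish morphology is messier, and with three or more adjectives one would want commas; the point here is only the reorder/agree/insert logic.)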

Some things to note

Some advanced features include

  • weighting
  • conditionals
  • macros

These let you do some useful things that aren't possible otherwise. See the documentation or ask your prof or TA about it :)

Examples of implemented Apertium transfer systems

Some examples are available:

  • eng-spa (in-class): a basic example from class showing how to transfer adjective+noun (etc.) from English to Spanish ("big houses → casas largas": number and gender agreement and reordering).
  • eng-spa: a more extensive English-to-Spanish example.
  • eng-kir (apertium): English-to-Kyrgyz, lots of conditionals
  • kaz-kir (apertium): Kazakh-to-Kyrgyz, lots of macros
  • uzb-kaa (apertium): Uzbek-to-Qaraqalpaq, lots of macros
  • br-fr (apertium): Breton-to-French, uses lemma lists and weighting

Writing rules

Fairly extensive documentation is available on the Apertium wiki:


Scrape a mini test corpus

  1. First make sure you have scrapeTransferTests. Test that running scrapeTransferTests gives you information on using the tool. If not, clone the tools repo (or git pull to update it, if you already have it cloned from other assignments) and run sudo make. Test again.
  2. Scrape the transferTests from your contrastive grammar page into a small parallel corpus. E.g., scrapeTransferTests -p abc-xyz "Language1_and_Language2/Contrastive_Grammar" will result in an abc.tests.txt and xyz.tests.txt file that contain the respective sides of any transferTests on your contrastive grammar page specified as being for abc-to-xyz translation.
  3. Add these two files to your bilingual corpus repository and add mention of their origin (the wiki page) to the MANIFEST file.


WER, or word error rate, is a measure of how different two texts are. You will want to know how different the translation produced by your translation pair (the "test translation") is from the known-good translation of the phrases in your parallel corpus (the "reference translation").

PER (position-independent error rate) is the same measurement, just not sensitive to word position within a phrase. I.e., a correct translation of every word but in an entirely wrong order will give you a high (bad) WER but a low (good) PER.
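The two measures can be sketched in a few lines. This re-implementation is only illustrative (apertium-eval-translator is the tool to actually use): WER is word-level edit distance, PER compares unordered bags of words, and both are normalised by reference length.

```python
# Illustrative WER/PER computation; use apertium-eval-translator for real.
from collections import Counter

def wer(reference, test):
    r, t = reference.split(), test.split()
    # Standard word-level Levenshtein distance, row by row.
    prev = list(range(len(t) + 1))
    for i, rw in enumerate(r, 1):
        cur = [i]
        for j, tw in enumerate(t, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (rw != tw)))  # substitution
        prev = cur
    return prev[-1] / len(r)

def per(reference, test):
    r, t = Counter(reference.split()), Counter(test.split())
    # Reference words not matched anywhere in the test, order ignored.
    missing = sum((r - t).values())
    return missing / sum(r.values())

ref = "the houses are big"
hyp = "big are the houses"
print(wer(ref, hyp), per(ref, hyp))  # 1.0 0.0  (bad WER, perfect PER)
```

This also makes the WER/PER contrast above concrete: every word is translated correctly, but in the wrong order.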

To test WER and PER:

  1. First make sure you have apertium-eval-translator. Test that running apertium-eval-translator gives you information on using the tool. If not, clone the tools repo (or git pull to update it, if you already have it cloned from other assignments) and run make.
  2. You need two files: one test translation, and one reference translation. The reference translation is the parallel text in your corpus, e.g. abc.tests.txt. To get a test translation, run the source text through apertium and direct the output into a new file, e.g. cat xyz.tests.txt | apertium -d . xyz-abc > xyz-abc.tests.txt. You should add the [final] test translation to your repository.
  3. The following command should then give you WER and PER measures and some other useful numbers:
    • apertium-eval-translator -r abc.tests.txt -t xyz-abc.tests.txt

The assignment

This assignment is early in week 13 (this semester, noon on Monday, May 3, 2021).

Getting set up

  1. Add a page to the wiki called Language1_and_Language2/Structural_transfer, linking to it from the main page on the language pair.
    • Put the page in the category Category:Sp21_StructuralTransfer and the categories for the two languages.
  • Perform WER, PER, and coverage tests on your short sentences corpus, and add the results to a pre-evaluation section.

Adding stems

  1. Add all the words needed to analyse the transfer tests (from the last assignment) to the bilingual dictionary.
    • And make sure both analysers can analyse all sentences correctly, which includes adding the words to the relevant monolingual dictionaries as necessary.

Write structural transfer rules

  1. Implement at least one item from your contrastive grammar.
    • Each person in each group should implement at least one item for the direction that translates into the language that they have been primarily working with. The same item does not need to be used for each direction.
    • If the contrastive grammar item only involves relabelling or reordering tags within the same form, then please do at least two items.

Wrapping up

  1. Add to your structural transfer wiki page:
    • Add at least one example sentence for each item you implement. Show the outputs of the following modes for your translation system: tagger, biltrans, transfer, and the pair itself (abc-xyz).
    • Perform WER, PER, and coverage tests again, and add the results to a post-evaluation section on the wiki page.