Lelapa-X-GLOT (isiZulu)

Model Details

Basic information about the model: Review section 4.1 of the model cards paper.

Organization: Lelapa AI
Product: Vulavula
Model date: 30 October 2024
Feature: Translation
Language: Multilingual
Domain: News, Religion, General
Model name: Lelapa-X-GLOT (isiZulu)
Model version: 1.0.0
Model type: Fine-Tuned Proprietary Model

Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: Proprietary Fine-tuning of a Base Model on Text Data

License: Proprietary

Contact: info@lelapa.ai

Intended use

Use cases that were envisioned during development: Review section 4.2 of the model cards paper.

Primary intended uses

This machine translation model is primarily targeted towards translation of low-resource languages. It allows for single sentence translation for isiZulu.

Primary intended users

The translation model can be used by:

  • Machine Translation community
  • Researchers

Out-of-scope use cases

Any language or domain outside isiZulu translation is out of scope. The model is not intended for full-document translation: it was trained with input lengths not exceeding 512 tokens, so translating longer sequences may degrade output quality.
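Since the model targets single sentences with a 512-token input limit, callers would typically segment a document before translation. The sketch below is an illustrative pre-processing step, not part of the Vulavula API: the regex sentence splitter and whitespace token count are simplifying assumptions (a real subword tokenizer usually produces more tokens per word, and production systems would use a language-aware segmenter).

```python
import re

MAX_TOKENS = 512  # documented training-time input limit


def split_into_sentences(text: str) -> list[str]:
    """Naive splitter on terminal punctuation followed by whitespace.

    Illustrative only; isiZulu production use would need a proper
    language-aware sentence segmenter.
    """
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def is_within_limit(sentence: str, max_tokens: int = MAX_TOKENS) -> bool:
    """Rough whitespace token count as a conservative stand-in for the
    model's real subword tokenizer."""
    return len(sentence.split()) <= max_tokens
```

A caller would split the document, verify each sentence with `is_within_limit`, and translate sentence by sentence rather than submitting the full text.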

Factors

Factors include linguistic, cultural, and contextual variables as well as technical aspects that could influence the performance and utility of the translation model. Refer to Section 4.3 of the model cards paper for further guidance on relevant factors.

Relevant Factors

  • Languages and Dialects:
    The model is trained to handle a vast number of languages, each with unique grammar, vocabulary, and syntactic structures. However, variations in dialects, regional language use, and idiomatic expressions could impact translation accuracy. Factors such as linguistic overlap or the relative scarcity of high-quality data for certain languages are also significant.
  • Cultural Context:
    The model may struggle with cultural references, idiomatic phrases, and terms with specific cultural meanings. Factors related to cultural context, like the model's ability to maintain the tone and cultural relevance of a translation, are crucial.
  • Technical Attributes and Instrumentation:
    Performance might differ based on computational resources, such as the quality of hardware or the use of GPU acceleration. Additionally, variations in the source and target language quality, sentence length, and the complexity of linguistic structures are essential factors to consider.

Evaluation Factors

Model performance is evaluated using automatic metrics such as BLEU and the Character F1 (CHF1) score.

Metrics

Model performance measures

The model is evaluated using Character F1 Score (CHF1), an automatic metric that provides a balanced measure of precision and recall at the character level. As with the standard F1 score, the CHF1 is the harmonic mean of character-level Precision and Recall.

  • Precision at the character level measures how well the model avoids including incorrect or extraneous characters in its translations. It reflects the model's ability to produce accurate and clean outputs that match the reference translation as closely as possible.
  • Recall at the character level indicates how well the model captures all the correct characters from the reference translation, ensuring completeness and accuracy in character representation.

A higher Character F1 Score means the model is effective in maintaining precise and complete character sequences, indicating a good balance between avoiding unnecessary character additions and capturing all relevant characters.
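The precision/recall balance described above can be sketched with a minimal character-level F1 function. This is an illustrative simplification using a bag-of-characters overlap between hypothesis and reference; it is not the exact scorer used to evaluate the model (chrF-style metrics, for instance, typically score character n-grams rather than single characters).

```python
from collections import Counter


def char_f1(hypothesis: str, reference: str) -> float:
    """Harmonic mean of character-level precision and recall.

    Precision: fraction of hypothesis characters also found in the
    reference (penalises extraneous characters).
    Recall: fraction of reference characters recovered by the
    hypothesis (penalises omissions). Whitespace is ignored.
    """
    hyp = Counter(hypothesis.replace(" ", ""))
    ref = Counter(reference.replace(" ", ""))
    overlap = sum((hyp & ref).values())  # matched character counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

An exact match scores 1.0; a translation that drops half the reference characters, even with perfect precision, is pulled down by recall, which is the balance the harmonic mean enforces.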

CHF1 scores were computed on test sets spanning several isiZulu domains; per-domain results appear under Quantitative analyses below.

Decision thresholds

No decision thresholds have been specified

Evaluation data

All referenced datasets would ideally point to any set of documents that provide visibility into the source and composition of the dataset. Evaluation datasets should include datasets that are publicly available for third-party use. These could be existing datasets or new ones provided alongside the model card analyses to enable further benchmarking.

Datasets

  • Autshumato English-isiZulu Parallel Corpora
  • Autshumato Multilingual Word and Phrase Translations
  • Umsuka English-isiZulu Parallel Corpus
  • The South African Gov-ZA multilingual corpus
  • The Vuk'uzenzele South African Multilingual Corpus
  • Proprietary Datasets

Motivation

These datasets were selected because they are open-source, high-quality, and cover the targeted languages. They help capture cultural and linguistic aspects that are crucial to the development process and to better performance.

Training data

The model was trained on parallel multilingual data from a variety of open-source sources.

Quantitative analyses

Quantitative analyses should be disaggregated, that is, broken down by the chosen factors. Quantitative analyses should provide the results of evaluating the model according to the chosen metrics, providing confidence interval values when possible.

Review section 4.7 of the model cards paper.

Unitary results (isiZulu)

Domain       Lelapa-X-GLOT (CHF1 score)
Government   40.97
News         57.73

Ethical considerations

This section is intended to demonstrate the ethical considerations that went into model development, surfacing ethical challenges and solutions to stakeholders. The ethical analysis does not always lead to precise solutions, but the process of ethical contemplation is worthwhile to inform on responsible practices and next steps in future work: Review section 4.8 of the model cards paper.
All data has been anonymised, so the model does not contain any personal information.

Caveats and recommendations

This section should list additional concerns that were not covered in the previous sections. Review section 4.9 of the model cards paper.
Additional caveats are outlined extensively in our Terms and Conditions.
