Random Translation Generation with Transformer Model

Code introduction


This function uses a pre-trained Transformer model from the Fairseq library to perform text translation. It loads the model, encodes the source text into token IDs, generates a translation by sampling from the model's output distribution, and decodes the resulting tokens back into text.


Technology Stack: Fairseq, PyTorch, TransformerModel

Code Type: Translation function

Code Difficulty: Intermediate


import torch
from fairseq.models.transformer import TransformerModel

def generate_random_translation(source_text, target_language):
    """
    Translate the given source text with a pre-trained Transformer model
    from the Fairseq library, sampling from the model's output distribution
    so that repeated calls can yield different translations.

    Note: the target language is fixed by the chosen checkpoint; the
    target_language argument is kept only for interface compatibility.
    """
    # Load the pre-trained model. from_pretrained returns a hub interface
    # that bundles the model with its dictionaries and (optional) BPE codes,
    # so no separate Dictionary object needs to be loaded.
    model_path = "path_to_pretrained_model"  # Replace with your model directory
    model = TransformerModel.from_pretrained(
        model_path,
        checkpoint_file="model.pt",  # adjust if your checkpoint has a different name
        data_name_or_path=model_path,
    )
    model.eval()

    # Encode the source text into a tensor of token IDs.
    src_tokens = model.encode(source_text)

    # Generate a translation by sampling instead of beam search,
    # which is what makes the output "random".
    with torch.no_grad():
        hypos = model.generate(src_tokens, beam=1, sampling=True, sampling_topk=10)

    # Decode the highest-scoring hypothesis back into text.
    target_text = model.decode(hypos[0]["tokens"])

    return target_text
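Since running the function above requires a downloaded Fairseq checkpoint, the encode → translate → decode flow it follows can be illustrated with a self-contained toy stand-in. The dictionary, translation table, and helper names below are purely illustrative and are not part of the Fairseq API:

```python
# Toy sketch of the encode -> translate -> decode pipeline.
# All names and data here are illustrative only.

toy_dict = {"<unk>": 0, "hello": 1, "world": 2, "hallo": 3, "welt": 4}
inv_dict = {idx: tok for tok, idx in toy_dict.items()}

def encode_line(text, vocab):
    """Map whitespace-separated tokens to integer IDs (unknowns -> 0)."""
    return [vocab.get(tok, vocab["<unk>"]) for tok in text.lower().split()]

def decode_line(ids, inv_vocab):
    """Map integer IDs back to a space-joined token string."""
    return " ".join(inv_vocab[i] for i in ids)

# A stand-in "translation model": a lookup over token IDs
# (hello -> hallo, world -> welt).
toy_translation = {1: 3, 2: 4}

src_ids = encode_line("Hello world", toy_dict)          # [1, 2]
tgt_ids = [toy_translation.get(i, i) for i in src_ids]  # [3, 4]
print(decode_line(tgt_ids, inv_dict))                   # hallo welt
```

In the real function, Fairseq's hub interface plays all three roles at once: `model.encode` replaces `encode_line`, the Transformer's `generate` replaces the lookup table, and `model.decode` replaces `decode_line`.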