Random Text Generation with Pre-trained Language Models

Code introduction


This function uses the Hugging Face Transformers library to generate text from a given prompt with a pre-trained causal language model. It randomly selects one of several pre-trained checkpoints, loads the model and its matching tokenizer, and then generates a continuation of up to a specified number of tokens.


Technology Stack : Hugging Face Transformers, AutoModelForCausalLM, AutoTokenizer

Code Type : Function

Code Difficulty : Intermediate


import random
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_random_text(prompt, length=50):
    # Randomly select a causal (decoder-only) language model.
    # T5 checkpoints are excluded: they are encoder-decoder models
    # and cannot be loaded with AutoModelForCausalLM.
    model_name = random.choice(["gpt2", "EleutherAI/gpt-neo-2.7B"])
    model = AutoModelForCausalLM.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # Tokenize the prompt and generate up to `length` tokens in total
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_length=length,
        do_sample=True,                       # sample so the output varies between runs
        pad_token_id=tokenizer.eos_token_id,  # GPT-style models define no pad token
    )

    # Decode the generated token IDs back into a string
    generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)

    return generated_text
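
Because `random.choice` draws from the module-level random number generator, each call to the function may load a different checkpoint, which makes runs hard to reproduce. A minimal sketch of making the selection reproducible with a seeded `random.Random` (the `pick_model` helper and its model list are illustrative assumptions, not part of the Transformers API):

```python
import random

def pick_model(seed=None):
    # A dedicated, optionally seeded RNG makes the checkpoint choice
    # reproducible without disturbing the global random state.
    rng = random.Random(seed)
    return rng.choice(["gpt2", "EleutherAI/gpt-neo-2.7B"])

# With the same seed, repeated calls always select the same checkpoint.
print(pick_model(seed=0))
```

The selected name could then be passed into the function above in place of the inline `random.choice` call, keeping the rest of the generation logic unchanged.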