---
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
language:
- en
pipeline_tag: text-generation
---

# GPT-2 Medium Fine-Tuned on Anthropic-hh Dataset

This repository houses a GPT-2 Medium model fine-tuned on the Anthropic-hh (hh-rlhf) dataset. During fine-tuning, the Human turns were masked so that the loss was computed exclusively on the Assistant's responses.

## Model Information

- **Base Model:** GPT-2 Medium
- **Training Data:** Anthropic-hh dataset
- **Fine-Tuning Approach:** Supervised fine-tuning with the loss restricted to the Assistant's responses (an illustrative masking sketch appears at the end of this card).

## How to Use

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load tokenizer and model
tokenizer = GPT2Tokenizer.from_pretrained("RaushanTurganbay/GPT2_instruct_tuned")
model = GPT2LMHeadModel.from_pretrained("RaushanTurganbay/GPT2_instruct_tuned")

# Generate a response
prompt = "Your input prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=150, num_return_sequences=1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
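Because the model was tuned on Anthropic-hh conversations, prompts that mirror the dataset's alternating `Human:` / `Assistant:` framing are likely to work better than a bare instruction. The exact template used during training is not documented here, so the format below is an assumption based on the dataset's layout; it reuses the `tokenizer` and `model` loaded above.

```python
# Assumed prompt template in the hh-rlhf style (not a confirmed training format)
prompt = "\n\nHuman: How do I get started with baking bread?\n\nAssistant:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```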
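## Fine-Tuning Loss Masking (Illustrative)

The training code is not included in this repository. The sketch below only illustrates the masking idea described above, assuming a Human/Assistant-delimited transcript (a hypothetical example, not taken from the training set) and Hugging Face's standard convention that label positions set to -100 are ignored by the cross-entropy loss.

```python
import torch
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")

# Hypothetical dialogue in the Anthropic-hh style (assumption, not the exact training data)
dialogue = (
    "\n\nHuman: How do I bake bread?"
    "\n\nAssistant: Start by mixing flour, water, salt, and yeast."
)

# Split off the final Assistant response; everything before it will be masked out.
prefix, response = dialogue.rsplit("Assistant:", 1)
prefix = prefix + "Assistant:"

prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
response_ids = tokenizer(response, return_tensors="pt").input_ids
input_ids = torch.cat([prefix_ids, response_ids], dim=1)

# Labels: set -100 on the Human/prompt portion so the loss is computed
# only on the Assistant's response tokens.
labels = input_ids.clone()
labels[:, : prefix_ids.shape[1]] = -100

# Inside a standard training loop these tensors would be used as:
#   outputs = model(input_ids=input_ids, labels=labels)
#   outputs.loss.backward()
```

Tokenizing the prefix and response separately keeps the example short; a full training script would typically tokenize whole transcripts and mask every Human span, not just the prompt before the last Assistant turn.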