
Rutgers Meal Planning Assistant

A specialized LoRA adapter for the GPT-OSS model, fine-tuned for Rutgers dining hall meal planning. This model generates personalized meal plans with specific calorie and protein targets based on available menu items.

🚀 Quick Start

Installation

pip install unsloth transformers peft torch

Basic Usage

from unsloth import FastLanguageModel
from peft import PeftModel

# Load base model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",
    max_seq_length=1024,
    dtype=None,
    load_in_4bit=True,
    attn_implementation="eager",
)

# Load LoRA adapter
model = PeftModel.from_pretrained(model, "RaghavM12/gpt-oss-20b-rutgers-mealplan")

# Set to inference mode
FastLanguageModel.for_inference(model)

# Generate meal plan
prompt = "You are a Rutgers dining hall meal planning assistant. Based on today's menu, create a personalized meal plan (breakfast, lunch, dinner) that targets about 2000 calories and 120 grams of protein. Show nutrition per item and totals."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Note: temperature only takes effect when do_sample=True
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.1, do_sample=True)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

📊 Model Details

  • Base Model: unsloth/gpt-oss-20b
  • Training Method: LoRA (Low-Rank Adaptation)
  • Training Epochs: 2
  • Learning Rate: 1e-4
  • LoRA Rank: 16
  • LoRA Alpha: 16
  • Dataset: Rutgers meal planning dataset (6000 samples)
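
For reference, the hyperparameters above roughly correspond to a standard Unsloth + TRL fine-tuning run. The sketch below is illustrative rather than the exact training script: the dataset variable, text field, and batch size are assumptions, and depending on your trl version some arguments may need to move into an SFTConfig.

from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Attach LoRA adapters to the base model (rank/alpha as listed above)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,          # ~6000 meal planning examples
    dataset_text_field="text",      # assumption: samples formatted as plain text
    max_seq_length=1024,
    args=TrainingArguments(
        num_train_epochs=2,
        learning_rate=1e-4,
        per_device_train_batch_size=2,  # assumption, not stated in this card
        output_dir="outputs",
    ),
)
trainer.train()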

🎯 Performance Metrics

  • JSON Format Success Rate: >90%
  • Structure Accuracy: >85%
  • Nutrition Accuracy: >60%
  • Generation Speed: optimized for inference via Unsloth's 4-bit loading

🍽️ Usage Examples

Example 1: Basic Meal Planning

prompt = "Create a meal plan for 2000 calories and 120g protein"
# Returns: {"breakfast": [...], "lunch": [...], "dinner": [...], "totals": {"calories": 2000, "protein": 120}}

Example 2: Dietary Restrictions

prompt = "Create a vegetarian meal plan for 1800 calories and 100g protein"
# Returns personalized vegetarian meal plan

Example 3: Custom Menu

prompt = "Based on this menu: [menu items], create a meal plan for 2200 calories and 140g protein"
# Returns meal plan using specific menu items
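
Because the model answers in the JSON structure shown in Example 1, it is usually worth parsing the generated text before using it. A minimal sketch, assuming the output format above; the extraction heuristic and function name are illustrative, and generation can still occasionally produce malformed JSON:

import json

def extract_meal_plan(generated_text: str):
    """Pull the first JSON object out of the generated text, if any."""
    start = generated_text.find("{")
    end = generated_text.rfind("}")
    if start == -1 or end <= start:
        return None
    try:
        return json.loads(generated_text[start:end + 1])
    except json.JSONDecodeError:
        return None

# response: decoded model output from the Basic Usage example
plan = extract_meal_plan(response)
if plan is not None:
    print(plan["totals"])  # e.g. {"calories": 2000, "protein": 120}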

📁 Repository Contents

  • adapter_config.json: LoRA configuration
  • adapter_model.safetensors: LoRA weights (~50MB)
  • tokenizer_config.json: Tokenizer configuration
  • tokenizer.json: Tokenizer vocabulary
  • README.md: This documentation

🔧 Advanced Usage

With Standard Transformers

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 20B base model in bf16 with automatic device placement to keep memory manageable
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gpt-oss-20b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/gpt-oss-20b")
model = PeftModel.from_pretrained(base_model, "RaghavM12/gpt-oss-20b-rutgers-mealplan")

Custom Generation Parameters

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.1,        # Lower for more consistent outputs
    do_sample=True,
    top_p=0.9,             # Nucleus sampling
    repetition_penalty=1.1  # Reduce repetition
)

📈 Training Data

The model was trained on 6000 meal planning examples including:

  • Various dietary preferences (vegetarian, high-protein, etc.)
  • Different calorie and protein targets (1500-3000 calories, 80-200g protein)
  • Real Rutgers dining hall menu items with nutritional information
  • Structured JSON output format

🎨 Model Architecture

This is a LoRA adapter that modifies the attention and MLP projection layers of the base GPT-OSS model:

  • Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
  • Rank: 16 (balance between performance and efficiency)
  • Alpha: 16 (scaling factor for LoRA weights)
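
The published adapter's settings can also be read back from adapter_config.json; a quick way to inspect them with peft (the rank, alpha, and target modules live in the same config file):

from peft import PeftConfig

cfg = PeftConfig.from_pretrained("RaghavM12/gpt-oss-20b-rutgers-mealplan")
print(cfg.peft_type)                # LORA
print(cfg.base_model_name_or_path)  # unsloth/gpt-oss-20b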

🔍 Evaluation Results

The model was evaluated on a test set of 100 meal planning scenarios:

  • JSON Format Success: 100% (all outputs are valid JSON)
  • Structure Accuracy: 100% (all outputs have correct meal structure)
  • Nutrition Accuracy: 64% (average accuracy for calorie/protein targets)
  • Text Similarity: 39% (compared to reference outputs)
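
The exact evaluation script is not included in this repository. The sketch below shows one plausible way to score the nutrition check, comparing the generated totals against the requested targets; the scoring formula and field names are assumptions based on the output format above:

def nutrition_accuracy(plan: dict, target_calories: float, target_protein: float) -> float:
    """Score in [0, 1]; 1.0 means the totals hit both targets exactly."""
    totals = plan.get("totals", {})
    cal_err = abs(totals.get("calories", 0) - target_calories) / target_calories
    pro_err = abs(totals.get("protein", 0) - target_protein) / target_protein
    return max(0.0, 1.0 - (cal_err + pro_err) / 2)

# plan: parsed output, e.g. from extract_meal_plan() above
score = nutrition_accuracy(plan, target_calories=2000, target_protein=120)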

🚀 Deployment Options

Local Inference

# Load model for local use
model = PeftModel.from_pretrained(base_model, "RaghavM12/gpt-oss-20b-rutgers-mealplan")

API Deployment

# Use with FastAPI or similar (assumes model and tokenizer are loaded as shown above)
from fastapi import FastAPI

app = FastAPI()

@app.post("/meal-plan")
async def generate_meal_plan(request: dict):
    prompt = request["prompt"]
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return {"meal_plan": response}

🤝 Contributing

Contributions are welcome! Areas for improvement:

  • Additional dietary restrictions
  • Better nutrition accuracy
  • Support for more meal types
  • Integration with real-time menu data

📄 License

This model is released under the Apache 2.0 License. See the LICENSE file for details.

🙏 Acknowledgments

  • Built with Unsloth for fast training
  • Based on the GPT-OSS model
  • Trained on Rutgers dining hall menu data
  • Special thanks to the Rutgers dining services for providing menu data

📞 Support

For questions or issues:

  • Open an issue on this repository
  • Contact: [Your contact information]

Note: This model is specifically designed for Rutgers dining hall meal planning and may not generalize well to other contexts without additional fine-tuning.
