Llama 3.2 1B JSON Extractor

A fine-tuned version of Llama 3.2 1B Instruct specialized for generating structured JSON outputs with high accuracy and schema compliance.

🎯 Model Description

This model has been fine-tuned to excel at generating valid, well-structured JSON objects based on Pydantic model schemas. It transforms natural language prompts into properly formatted JSON responses with remarkable consistency.
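A minimal usage sketch in Python (the prompt format and the BookReview schema are illustrative, and it assumes this repository hosts merged weights; if only a LoRA adapter is published, load the base model and attach the adapter with peft instead):

```python
import json

from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical target schema; any Pydantic model should work the same way.
class BookReview(BaseModel):
    title: str
    author: str
    rating: int
    summary: str

repo = "MathBite/llama1b_finetuned_json_creation"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Assumed prompt format: pass the JSON schema plus the request as a chat message.
messages = [{
    "role": "user",
    "content": (
        "Generate a JSON object matching this schema:\n"
        + json.dumps(BookReview.model_json_schema())
        + "\nTask: review the novel 'Dune'."
    ),
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=256)
text = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Raises pydantic.ValidationError if the output is not valid JSON for the schema.
review = BookReview.model_validate_json(text)
print(review)
```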

📊 Performance

🚀 Dramatic Improvement in JSON Generation:

  • JSON Validity Rate: 20% → 92% (a 72-percentage-point gain; see the measurement sketch after this list)
  • Schema Compliance: Near-perfect adherence to Pydantic model structures
  • Generalization: Successfully handles new Pydantic model classes not seen during training
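A validity rate like this can be computed as the fraction of raw generations that parse as JSON. A minimal sketch (an illustration, not the exact evaluation harness used for the numbers above):

```python
import json

def json_validity_rate(generations: list[str]) -> float:
    """Return the fraction of model outputs that parse as valid JSON."""
    valid = 0
    for text in generations:
        try:
            json.loads(text)
            valid += 1
        except json.JSONDecodeError:
            pass
    return valid / len(generations)
```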

🔧 Training Details

  • Base Model: meta-llama/Llama-3.2-1B-Instruct
  • Fine-tuning Method: LoRA (Low-Rank Adaptation) with Unsloth (a configuration sketch follows this list)
  • Training Data: Synthetic dataset with 15+ diverse Pydantic model types
  • Training Epochs: 15
  • Effective Batch Size: 16 (via gradient accumulation)
  • Learning Rate: 1e-4
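A hedged sketch of what this setup can look like with Unsloth and trl (only the epochs, learning rate, and effective batch size come from this card; the LoRA rank, target modules, sequence length, dataset path, and field names are assumptions, and keyword placement varies across trl versions):

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-1B-Instruct",
    max_seq_length=2048,  # assumed
    load_in_4bit=True,    # assumed
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,           # assumed LoRA rank
    lora_alpha=16,  # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed
)

dataset = load_dataset("json", data_files="train.jsonl")["train"]  # hypothetical file

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed field name
    args=TrainingArguments(
        num_train_epochs=15,            # from this card
        per_device_train_batch_size=4,  # assumed split: 4 per device with
        gradient_accumulation_steps=4,  # 4 accumulation steps = effective 16
        learning_rate=1e-4,             # from this card
        output_dir="outputs",
    ),
)
trainer.train()
```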

πŸ—οΈ Supported Model Types

The model can generate JSON for 15+ different object types (an example Pydantic definition follows this list), including:

  • Educational: Course, Resume, Events
  • Entertainment: FilmIdea, BookReview, GameIdea
  • Business: TShirtOrder, Recipe, House
  • Characters & Gaming: FictionalCharacter, GameArtifact
  • Travel: Itinerary
  • Science: SollarSystem, TextSummary
  • And many more...
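For illustration, here is a hypothetical Pydantic definition of the Recipe type listed above, together with the JSON schema the model would be asked to follow; the actual fields used in training are not published with this card:

```python
from pydantic import BaseModel

class Recipe(BaseModel):  # fields are assumed for illustration
    name: str
    servings: int
    ingredients: list[str]
    steps: list[str]

# This schema is the kind of structure passed to the model in the prompt.
print(Recipe.model_json_schema())
```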

🎯 Key Features

  • High JSON Validity: 92% of generations parse as valid JSON (see the retry sketch after this list for handling the remainder)
  • Schema Compliance: Follows Pydantic model structures precisely
  • Strong Generalization: Works with new, unseen model classes
  • Consistent Output: Reliable structured data generation
  • Lightweight: Only 1B parameters for efficient deployment
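Since roughly 8% of generations may still fail to parse, a thin retry wrapper can raise practical reliability. A sketch, assuming a hypothetical `generate(prompt) -> str` callable that wraps the model:

```python
from pydantic import BaseModel, ValidationError

def generate_validated(generate, prompt: str, schema: type[BaseModel], retries: int = 3):
    """Re-generate until the output validates against `schema`, up to `retries` times."""
    for _ in range(retries):
        text = generate(prompt)  # hypothetical prompt -> str callable
        try:
            # Raises ValidationError on malformed JSON or schema violations.
            return schema.model_validate_json(text)
        except ValidationError:
            continue
    raise RuntimeError(f"no schema-valid JSON after {retries} attempts")
```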

📚 Training Data

The model was fine-tuned on a synthetic dataset containing thousands of examples across diverse domains (a sketch of one training pair follows this list):

  • Character creation and game development
  • Business and e-commerce objects
  • Educational and professional content
  • Entertainment and media descriptions
  • Scientific and technical data structures
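A sketch of how one such synthetic training pair could be constructed (the prompt/completion format and the TShirtOrder fields are assumptions; the actual dataset is not published with this card):

```python
import json
from pydantic import BaseModel

class TShirtOrder(BaseModel):  # one of the listed training classes; fields assumed
    size: str
    color: str
    quantity: int

# A hand-written target instance stands in for whatever process produced the data.
target = TShirtOrder(size="L", color="navy", quantity=3)

example = {
    "prompt": (
        "Generate a JSON object matching this schema:\n"
        + json.dumps(TShirtOrder.model_json_schema())
        + "\nTask: order three large navy t-shirts."
    ),
    "completion": target.model_dump_json(),
}
print(json.dumps(example))
```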

📄 License

This model is released under the Apache 2.0 license.

🙏 Acknowledgments

  • Meta for the base Llama 3.2 model
  • Unsloth for the efficient fine-tuning framework
  • Hugging Face for model hosting and ecosystem