# Llama 3.2 1B JSON Extractor
A fine-tuned version of Llama 3.2 1B Instruct specialized for generating structured JSON outputs with high accuracy and schema compliance.
## Model Description
This model has been fine-tuned to generate valid, well-structured JSON objects that conform to Pydantic model schemas. It converts natural language prompts into properly formatted JSON responses with high consistency.
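A minimal inference sketch with Hugging Face Transformers is shown below. Only the repository id comes from this card; the prompt wording and generation settings are illustrative assumptions, and the exact prompt format used during fine-tuning may differ.

```python
# Minimal inference sketch (assumes the standard Llama 3.2 chat template;
# the prompt below is illustrative, not the exact training format).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MathBite/llama1b_finetuned_json_creation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{
    "role": "user",
    "content": "Create a BookReview JSON object for a classic sci-fi novel.",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```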
## Performance
**Dramatic improvement in JSON generation:**
- JSON Validity Rate: 20% → 92% (a 72-percentage-point gain over the base model)
- Schema Compliance: Near-perfect adherence to Pydantic model structures
- Generalization: Successfully handles previously unseen Pydantic model classes
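A validity rate like the one above can be computed by simply attempting to parse each generation. The sketch below is illustrative; the actual evaluation harness is not published with this card.

```python
import json

def json_validity_rate(generations: list[str]) -> float:
    """Fraction of model generations that parse as valid JSON."""
    def parses(text: str) -> bool:
        try:
            json.loads(text)
            return True
        except json.JSONDecodeError:
            return False

    return sum(parses(g) for g in generations) / max(len(generations), 1)
```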
## Training Details
- Base Model: meta-llama/Llama-3.2-1B-Instruct
- Fine-tuning Method: LoRA (Low-Rank Adaptation) with Unsloth
- Training Data: Synthetic dataset with 15+ diverse Pydantic model types
- Training Epochs: 15
- Batch Size: 16 (with gradient accumulation)
- Learning Rate: 1e-4
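The hyperparameters above map onto an Unsloth + TRL training loop roughly as follows. This is a sketch, not the published training script: the LoRA rank, target modules, sequence length, and the 4 × 4 batch/accumulation split are assumptions, and `train_dataset` is a placeholder for the synthetic Pydantic dataset.

```python
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model with Unsloth (4-bit loading and max_seq_length are assumptions).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-1B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank/alpha/target modules are typical choices, not confirmed).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,  # placeholder: the synthetic Pydantic dataset
    args=TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,  # effective batch size 16, as listed above
        learning_rate=1e-4,
        num_train_epochs=15,
        output_dir="outputs",
    ),
)
trainer.train()
```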
## Supported Model Types
The model can generate JSON for 15+ different object types, including the following (an illustrative class sketch appears after the list):
- Educational: Course, Resume, Events
- Entertainment: FilmIdea, BookReview, GameIdea
- Business: TShirtOrder, Recipe, House
- Characters & Gaming: FictionalCharacter, GameArtifact
- Travel: Itinerary
- Science: SollarSystem, TextSummary
- And many more...
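For illustration, here is a hypothetical class of the kind listed above; the field names are assumptions, not the actual training schemas. The class's JSON schema can be embedded in the prompt, and Pydantic can then validate the model's output against the class:

```python
from pydantic import BaseModel

class BookReview(BaseModel):
    title: str
    author: str
    rating: float        # e.g. 0.0-5.0
    summary: str
    recommended: bool

# The schema that would be embedded in the prompt:
schema = BookReview.model_json_schema()

# Validating a (sample) model generation against the class;
# raises pydantic.ValidationError on non-compliant output.
generation = (
    '{"title": "Dune", "author": "Frank Herbert", "rating": 4.8, '
    '"summary": "A desert-planet epic.", "recommended": true}'
)
review = BookReview.model_validate_json(generation)
```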
## Key Features
- High JSON Validity: 92% success rate in generating valid JSON
- Schema Compliance: Follows Pydantic model structures precisely
- Strong Generalization: Works with new, unseen model classes
- Consistent Output: Reliable structured data generation
- Lightweight: Only 1B parameters for efficient deployment
## Training Data
The model was fine-tuned on a synthetic dataset containing thousands of examples across diverse domains (a sketch of one plausible training pair follows the list):
- Character creation and game development
- Business and e-commerce objects
- Educational and professional content
- Entertainment and media descriptions
- Scientific and technical data structures
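One plausible shape for a single synthetic training pair, pairing a schema-bearing prompt with a compliant completion. The actual dataset format is not published with this card, so everything here is a hypothetical example:

```python
# Hypothetical structure of one synthetic training example.
example = {
    "prompt": (
        "Generate a JSON object matching the Recipe schema "
        '{"name": str, "prep_minutes": int, "ingredients": list[str]} '
        "for a quick weeknight pasta dish."
    ),
    "completion": (
        '{"name": "Garlic Pasta", "prep_minutes": 20, '
        '"ingredients": ["spaghetti", "garlic", "olive oil"]}'
    ),
}
```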
## Links
- GitHub Repository: LLM_FineTuning_4JsonCreation
- Base Model: meta-llama/Llama-3.2-1B-Instruct
## License
This model is released under the Apache 2.0 license.
## Acknowledgments
- Meta for the base Llama 3.2 model
- Unsloth for the efficient fine-tuning framework
- Hugging Face for model hosting and the surrounding ecosystem