SmolLM2-360M-Instruct-Text-2-JSON

Developed by: Pramod Koujalagi

A fine-tuned version of SmolLM2-360M-Instruct-bnb-4bit specialized for parsing unstructured calendar event requests into structured JSON data.

Model Description

This model is fine-tuned from SmolLM2-360M-Instruct-bnb-4bit using QLoRA to extract structured calendar event information from natural language text. It identifies and structures the key scheduling fields: action, date, time, attendees, location, duration, recurrence, and notes.
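
The prompt used for fine-tuning asks for a fixed JSON shape: all eight fields appear in every response, with null for anything the text does not mention. An illustrative template (value formats follow the example output below; the recurrence example is an assumption):

{
  "action": "<string or null>",
  "date": "<string or null, e.g. 15/04/2028>",
  "time": "<string or null, e.g. 3:00 PM>",
  "attendees": "<list of names, or null>",
  "location": "<string or null>",
  "duration": "<string or null, e.g. 1 hour>",
  "recurrence": "<string or null, e.g. weekly>",
  "notes": "<string or null>"
}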

📦 Example Usage

You can use the SmolLM2-360M-Instruct-Text-2-JSON model to parse natural language event descriptions into structured JSON format.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import json

# Load model and tokenizer
model_name = "pramodkoujalagi/SmolLM2-360M-Instruct-Text-2-JSON"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def parse_calendar_event(text):
    # Format the prompt
    formatted_prompt = f"""<|im_start|>user
Extract the relevant event information from this text and organize it into a JSON structure with fields for action, date, time, attendees, location, duration, recurrence, and notes. If a field is not present, return null for that field.

Text: {text}
<|im_end|>
<|im_start|>assistant
"""

    # Generate response
    inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=512,
            do_sample=True,
            temperature=0.1,
            top_p=0.95,
            pad_token_id=tokenizer.eos_token_id
        )

    # Process response: decode with special tokens kept so the ChatML markers
    # can be located, then slice out the assistant's reply
    output_text = tokenizer.decode(outputs[0], skip_special_tokens=False)
    response = output_text.split("<|im_start|>assistant\n")[1].split("<|im_end|>")[0].strip()

    # Return formatted JSON
    parsed_json = json.loads(response)
    return json.dumps(parsed_json, indent=2)

# Example input
event_text = "Plan an exhibition walkthrough on 15th, April 2028 at 3 PM with Harper, Grace, and Alex in the art gallery for 1 hour, bring bag."

# Output
print("Prompt:")
print(event_text)
print("\nModel Output:")
print(parse_calendar_event(event_text))

Output

Prompt:
Plan an exhibition walkthrough on 15th, April 2028 at 3 PM with Harper, Grace, and Alex in the art gallery for 1 hour, bring bag.

Model Output:
{
  "action": "Plan an exhibition walkthrough",
  "date": "15/04/2028",
  "time": "3:00 PM",
  "attendees": [
    "Harper",
    "Grace",
    "Alex"
  ],
  "location": "art gallery",
  "duration": "1 hour",
  "recurrence": null,
  "notes": "Bring bag"
}
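
Rather than hand-building the ChatML prompt string, you can let the tokenizer render it via its chat template. A minimal sketch, assuming the tokenizer ships the same ChatML template as the base SmolLM2-Instruct model:

instruction = (
    "Extract the relevant event information from this text and organize it into a "
    "JSON structure with fields for action, date, time, attendees, location, "
    "duration, recurrence, and notes. If a field is not present, return null for "
    "that field.\n\nText: " + event_text
)

# Render the prompt with the tokenizer's built-in chat template
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": instruction}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    outputs = model.generate(
        input_ids,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.1,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))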


Use Cases

  • Calendar application integration (see the sketch after this list)
  • Personal assistant scheduling systems
  • Meeting summarization tools
  • Email processing for event extraction
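
As a sketch of the first item, the parsed JSON maps naturally onto iCalendar. A minimal standard-library example, assuming the date/time strings follow the formats shown in the example output (15/04/2028, 3:00 PM) and defaulting to a one-hour slot, since durations are free-form strings:

from datetime import datetime, timedelta

def event_json_to_ics(event: dict) -> str:
    """Turn the model's parsed event JSON into a minimal iCalendar VEVENT."""
    # Assumes DD/MM/YYYY dates and 12-hour clock times, as in the example output
    start = datetime.strptime(f"{event['date']} {event['time']}", "%d/%m/%Y %I:%M %p")
    end = start + timedelta(hours=1)  # assumption: fixed one-hour duration
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "BEGIN:VEVENT",
        f"DTSTART:{start:%Y%m%dT%H%M%S}",
        f"DTEND:{end:%Y%m%dT%H%M%S}",
        f"SUMMARY:{event['action']}",
    ]
    if event.get("location"):
        lines.append(f"LOCATION:{event['location']}")
    if event.get("attendees"):
        lines.append("DESCRIPTION:Attendees: " + ", ".join(event["attendees"]))
    lines += ["END:VEVENT", "END:VCALENDAR"]
    return "\r\n".join(lines)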

Training Details

Training Data

The model was trained on a custom dataset consisting of 1,149 examples (1,034 training, 115 validation) of natural language event descriptions paired with structured JSON outputs. The dataset includes a wide variety of event types, date/time formats, and varying combinations of fields.
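
A record from such a dataset might look like the following; the exact storage schema is an assumption, but the user/assistant pairing mirrors the prompt format shown in the usage example:

{
  "messages": [
    {
      "role": "user",
      "content": "Extract the relevant event information from this text and organize it into a JSON structure with fields for action, date, time, attendees, location, duration, recurrence, and notes. If a field is not present, return null for that field.\n\nText: Plan an exhibition walkthrough on 15th, April 2028 at 3 PM with Harper, Grace, and Alex in the art gallery for 1 hour, bring bag."
    },
    {
      "role": "assistant",
      "content": "{\"action\": \"Plan an exhibition walkthrough\", \"date\": \"15/04/2028\", \"time\": \"3:00 PM\", \"attendees\": [\"Harper\", \"Grace\", \"Alex\"], \"location\": \"art gallery\", \"duration\": \"1 hour\", \"recurrence\": null, \"notes\": \"Bring bag\"}"
    }
  ]
}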

Training Procedure

  • Fine-tuning method: QLoRA (Quantized Low-Rank Adaptation); a configuration sketch follows this list
  • LoRA configuration:
    • Rank: 64
    • Alpha: 32
    • Target modules: All key model components
    • Rank-stabilized LoRA: Enabled
  • Training hyperparameters:
    • Batch size: 8 (2 per device × 4 gradient accumulation steps)
    • Learning rate: 2e-4 with cosine scheduler
    • Epochs: 3
    • Weight decay: 0.01
    • Optimizer: AdamW (8-bit)
    • Gradient checkpointing: Enabled
  • Training time: ~15 minutes
  • Hardware used: T4 GPU
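
The settings above translate roughly into the following peft/transformers setup. A sketch rather than the exact training script; the target_modules list names the projection layers typical for Llama-style models and is an assumption:

from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=64,                    # LoRA rank
    lora_alpha=32,           # LoRA alpha
    use_rslora=True,         # rank-stabilized LoRA
    # Assumption: the usual attention + MLP projections for Llama-style models
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,   # effective batch size 8
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    weight_decay=0.01,
    optim="adamw_bnb_8bit",          # 8-bit AdamW
    gradient_checkpointing=True,
)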

Training Metrics

  • Validation perplexity: 1.2091
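
Perplexity is the exponential of the mean cross-entropy loss, so the reported value implies a final validation loss of roughly 0.19:

import math

# perplexity = exp(loss)  =>  loss = ln(perplexity)
print(math.log(1.2091))  # ≈ 0.1899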