Fine-tuning this model on my dataset of question and SQL pairs

#44
by PratikJadon - opened

For training, I am using this prompt:
input_prompt = f"""Task Generate a SQL query to answer the question using the given Tenant ID.
Tenant ID: {tenant}
[QUESTION]{q}[/QUESTION]

SQL Query
[SQL][/SQL]"""

label = f"""Task Generate a SQL query to answer the question using the given Tenant ID.
Tenant ID: {tenant}
[QUESTION]{q}[/QUESTION]
SQL Query
[SQL]{sql}[/SQL]"""

inputs.append(input_prompt)
labels.append(label)
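
For context, the two append calls above sit inside a loop over my dataset. A rough, self-contained sketch of that loop, with placeholder field names and one made-up example row (my real data has the same three fields), looks like this:

PROMPT_TEMPLATE = """Task Generate a SQL query to answer the question using the given Tenant ID.
Tenant ID: {tenant}
[QUESTION]{q}[/QUESTION]

SQL Query
[SQL]{sql}[/SQL]"""

rows = [  # placeholder data; the real dataset has many rows with these three fields
    {"tenant_id": 42, "question": "How many orders were placed last week?", "sql": "SELECT COUNT(*) FROM orders"},
]

inputs, labels = [], []
for row in rows:
    tenant, q, sql = row["tenant_id"], row["question"], row["sql"]
    # the input has an empty [SQL][/SQL] slot; the label has the gold SQL filled in
    inputs.append(PROMPT_TEMPLATE.format(tenant=tenant, q=q, sql=""))
    labels.append(PROMPT_TEMPLATE.format(tenant=tenant, q=q, sql=sql))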

# Tokenize the inputs
model_inputs = tokenizer(inputs, max_length=512, truncation=True, padding="max_length", return_tensors="pt")
model_labels = tokenizer(labels, max_length=512, truncation=True, padding="max_length", return_tensors="pt")

model_inputs["labels"] = model_labels["input_ids"]

I am using these input prompts and labels, feeding their tokens to train my model, but I am not getting any accuracy.
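
Concretely, the training step I run with these tensors looks roughly like the sketch below; the checkpoint id, batch size, and learning rate are placeholders rather than my exact settings, and I load the model as a causal LM (if this checkpoint is actually seq2seq, that part would differ):

import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForCausalLM

model_name = "model-id-being-fine-tuned"  # placeholder for the actual checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

train_data = TensorDataset(
    model_inputs["input_ids"],
    model_inputs["attention_mask"],
    model_inputs["labels"],
)
loader = DataLoader(train_data, batch_size=2, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

for input_ids, attention_mask, labels in loader:
    outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()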
