---
license: apache-2.0
language:
- en
tags:
- text-to-sql
- text to sql
pretty_name: Presto/Athena Text to SQL Dataset
size_categories:
- 1K<n<10K
---
I created this dataset by using sqlglot to auto-convert the Spider and WikiSQL datasets to Presto syntax, followed by some regexes for additional cleanup.
An example use case is fine-tuning an existing model that already performs well at the standard SQL syntax used by the major text-to-SQL training datasets, so that it responds in the Presto/Athena dialect instead.
Example of fine-tuning with this dataset (in this case Mistral 7B Instruct):
```python
import json

import pandas as pd
from datasets import Dataset

def read_jsonl(file_path):
    """Read a JSONL file into a list of dicts, one per line."""
    data = []
    with open(file_path, 'r', encoding='utf-8') as file:
        for line in file:
            json_data = json.loads(line.strip())
            data.append(json_data)
    return data

# Read the train and validation files
train_data = read_jsonl('training_data/train.jsonl')  # use your own path to the training/validation data here
valid_data = read_jsonl('training_data/valid.jsonl')

# Convert to pandas DataFrames
train_df = pd.DataFrame(train_data)
valid_df = pd.DataFrame(valid_data)

# Convert the DataFrames to Hugging Face Datasets
train_dataset = Dataset.from_pandas(train_df)
valid_dataset = Dataset.from_pandas(valid_df)

# Example of processing
# train_texts = [example['text'] for example in train_dataset]
# valid_texts = [example['text'] for example in valid_dataset]

instruct_tune_dataset = {
    "train": train_dataset,
    "test": valid_dataset,
}
```
...
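Each line of the JSONL files is a standalone JSON object. The snippet below sketches the assumed record shape (a single `text` field holding the table schema, question, and answer) and parses one line with the standard library; the field content is illustrative, not copied from the dataset:

```python
import json

# Assumed record layout; the schema, question, and query are made up.
line = '{"text": "table: singers (name, age) Q: how many singers? A: SELECT COUNT(*) FROM singers"}'
record = json.loads(line)
print(record["text"])
```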
```python
def create_prompt(sample):
    """
    Update the prompt template:
    combine both the prompt and input into a single column.
    """
    bos_token = "<s>"
    original_system_message = "table:"
    system_message = (
        "Use the provided input to create an instruction that could have been "
        "used to generate the response with an LLM. The query dialect is "
        "Athena/Presto. The database table used is: "
    )
    question_and_response = (
        sample["text"]
        .replace(original_system_message, "")
        .replace("Q:", "\n\n### Input:\n")
        .replace("A:", "\n### Response:\n")
        .strip()
    )
    eos_token = "</s>"

    full_prompt = bos_token
    full_prompt += "### Instruction:"
    full_prompt += "\n" + system_message
    full_prompt += "\n" + question_and_response
    full_prompt += eos_token
    return full_prompt
```
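To see what this formatting produces, the sketch below runs the same string replacements on a made-up record (the `text` value is illustrative, not taken from the dataset):

```python
# Hypothetical sample in the dataset's assumed "table: ... Q: ... A: ..." layout.
sample_text = "table: singers (name, age) Q: how many singers are there? A: SELECT COUNT(*) FROM singers"

# Same transformation as create_prompt, inlined for a self-contained demo.
body = (
    sample_text
    .replace("table:", "")
    .replace("Q:", "\n\n### Input:\n")
    .replace("A:", "\n### Response:\n")
    .strip()
)
prompt = "<s>" + "### Instruction:" + "\nThe database table used is: " + "\n" + body + "</s>"
print(prompt)
```

The result is a single `<s>...</s>`-wrapped string with `### Instruction:`, `### Input:`, and `### Response:` sections, which is what the trainer's `formatting_func` feeds to the model.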
...
```python
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,
    peft_config=peft_config,
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    packing=True,
    formatting_func=create_prompt,  # applies create_prompt to every train and eval example
    args=args,
    train_dataset=instruct_tune_dataset["train"],
    eval_dataset=instruct_tune_dataset["test"],
)
```